Remove unnecessary explicit title anchors and use relative links instead (#20620)
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
@@ -1,7 +1,6 @@
---
title: Quickstart
---
-[](){ #quickstart }

This guide will help you quickly get started with vLLM to perform:

@@ -43,7 +42,7 @@ uv pip install vllm --torch-backend=auto
```

!!! note
-    For more detail and non-CUDA platforms, please refer [here][installation-index] for specific instructions on how to install vLLM.
+    For more detail and non-CUDA platforms, please refer [here](installation/README.md) for specific instructions on how to install vLLM.

[](){ #quickstart-offline }

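For readers following along outside the diff, a quick way to confirm that the install step referenced above succeeded is to import the package and print its version. This is an illustrative check, not part of this commit:

```python
# Illustrative sanity check after `uv pip install vllm --torch-backend=auto`;
# not part of this commit, just a quick confirmation that the install worked.
import vllm

print(vllm.__version__)
```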
@@ -77,7 +76,7 @@ prompts = [
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
```

-The [LLM][vllm.LLM] class initializes vLLM's engine and the [OPT-125M model](https://arxiv.org/abs/2205.01068) for offline inference. The list of supported models can be found [here][supported-models].
+The [LLM][vllm.LLM] class initializes vLLM's engine and the [OPT-125M model](https://arxiv.org/abs/2205.01068) for offline inference. The list of supported models can be found [here](../models/supported_models.md).

```python
llm = LLM(model="facebook/opt-125m")
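The hunk above ends mid-code-block. For context, the surrounding quickstart code amounts to a short offline-inference script along these lines; this is a sketch based on the snippets visible in the diff, and since the full prompts list is truncated in the hunk header ("prompts = ["), the prompt values here are illustrative:

```python
from vllm import LLM, SamplingParams

# The prompts list is truncated in the hunk header; these values are illustrative.
prompts = [
    "Hello, my name is",
    "The future of AI is",
]
# Sampling parameters as shown in the diff context above.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Initialize vLLM's engine with the OPT-125M model, as in the quickstart.
llm = LLM(model="facebook/opt-125m")

# Generate completions; each RequestOutput pairs a prompt with its generated text.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```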