commit bc2d4473bf (parent 3b352a2f92)
Author:    Harry Mellor <19981378+hmellor@users.noreply.github.com>
Date:      2025-03-10 18:43:08 +01:00
Committer: GitHub

    [Docs] Make installation URLs nicer (#14556)

    Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

8 changed files with 75 additions and 75 deletions


@@ -9,7 +9,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
:selected:
:sync: tpu
-:::{include} tpu.inc.md
+:::{include} ai_accelerator/tpu.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -19,7 +19,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Intel Gaudi
:sync: hpu-gaudi
-:::{include} hpu-gaudi.inc.md
+:::{include} ai_accelerator/hpu-gaudi.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -29,7 +29,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} AWS Neuron
:sync: neuron
-:::{include} neuron.inc.md
+:::{include} ai_accelerator/neuron.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -39,7 +39,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} OpenVINO
:sync: openvino
-:::{include} openvino.inc.md
+:::{include} ai_accelerator/openvino.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -56,7 +56,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Google TPU
:sync: tpu
-:::{include} tpu.inc.md
+:::{include} ai_accelerator/tpu.inc.md
:start-after: "## Requirements"
:end-before: "## Configure a new environment"
:::
@@ -66,7 +66,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Intel Gaudi
:sync: hpu-gaudi
-:::{include} hpu-gaudi.inc.md
+:::{include} ai_accelerator/hpu-gaudi.inc.md
:start-after: "## Requirements"
:end-before: "## Configure a new environment"
:::
@@ -76,7 +76,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} AWS Neuron
:sync: neuron
-:::{include} neuron.inc.md
+:::{include} ai_accelerator/neuron.inc.md
:start-after: "## Requirements"
:end-before: "## Configure a new environment"
:::
@@ -86,7 +86,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} OpenVINO
:sync: openvino
-:::{include} openvino.inc.md
+:::{include} ai_accelerator/openvino.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::
@@ -103,7 +103,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Google TPU
:sync: tpu
-:::{include} tpu.inc.md
+:::{include} ai_accelerator/tpu.inc.md
:start-after: "## Configure a new environment"
:end-before: "## Set up using Python"
:::
@@ -113,7 +113,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Intel Gaudi
:sync: hpu-gaudi
-:::{include} hpu-gaudi.inc.md
+:::{include} ai_accelerator/hpu-gaudi.inc.md
:start-after: "## Configure a new environment"
:end-before: "## Set up using Python"
:::
@@ -123,7 +123,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} AWS Neuron
:sync: neuron
-:::{include} neuron.inc.md
+:::{include} ai_accelerator/neuron.inc.md
:start-after: "## Configure a new environment"
:end-before: "## Set up using Python"
:::
@@ -133,7 +133,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} OpenVINO
:sync: openvino
-:::{include} ../python_env_setup.inc.md
+:::{include} python_env_setup.inc.md
:::
::::
@@ -150,7 +150,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Google TPU
:sync: tpu
-:::{include} tpu.inc.md
+:::{include} ai_accelerator/tpu.inc.md
:start-after: "### Pre-built wheels"
:end-before: "### Build wheel from source"
:::
@@ -160,7 +160,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Intel Gaudi
:sync: hpu-gaudi
-:::{include} hpu-gaudi.inc.md
+:::{include} ai_accelerator/hpu-gaudi.inc.md
:start-after: "### Pre-built wheels"
:end-before: "### Build wheel from source"
:::
@@ -170,7 +170,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} AWS Neuron
:sync: neuron
-:::{include} neuron.inc.md
+:::{include} ai_accelerator/neuron.inc.md
:start-after: "### Pre-built wheels"
:end-before: "### Build wheel from source"
:::
@@ -180,7 +180,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} OpenVINO
:sync: openvino
-:::{include} openvino.inc.md
+:::{include} ai_accelerator/openvino.inc.md
:start-after: "### Pre-built wheels"
:end-before: "### Build wheel from source"
:::
@@ -197,7 +197,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Google TPU
:sync: tpu
-:::{include} tpu.inc.md
+:::{include} ai_accelerator/tpu.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -207,7 +207,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Intel Gaudi
:sync: hpu-gaudi
-:::{include} hpu-gaudi.inc.md
+:::{include} ai_accelerator/hpu-gaudi.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -217,7 +217,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} AWS Neuron
:sync: neuron
-:::{include} neuron.inc.md
+:::{include} ai_accelerator/neuron.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -227,7 +227,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} OpenVINO
:sync: openvino
-:::{include} openvino.inc.md
+:::{include} ai_accelerator/openvino.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -246,7 +246,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Google TPU
:sync: tpu
-:::{include} tpu.inc.md
+:::{include} ai_accelerator/tpu.inc.md
:start-after: "### Pre-built images"
:end-before: "### Build image from source"
:::
@@ -256,7 +256,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Intel Gaudi
:sync: hpu-gaudi
-:::{include} hpu-gaudi.inc.md
+:::{include} ai_accelerator/hpu-gaudi.inc.md
:start-after: "### Pre-built images"
:end-before: "### Build image from source"
:::
@@ -266,7 +266,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} AWS Neuron
:sync: neuron
-:::{include} neuron.inc.md
+:::{include} ai_accelerator/neuron.inc.md
:start-after: "### Pre-built images"
:end-before: "### Build image from source"
:::
@@ -276,7 +276,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} OpenVINO
:sync: openvino
-:::{include} openvino.inc.md
+:::{include} ai_accelerator/openvino.inc.md
:start-after: "### Pre-built images"
:end-before: "### Build image from source"
:::
@@ -293,7 +293,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Google TPU
:sync: tpu
-:::{include} tpu.inc.md
+:::{include} ai_accelerator/tpu.inc.md
:start-after: "### Build image from source"
:end-before: "## Extra information"
:::
@@ -303,7 +303,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Intel Gaudi
:sync: hpu-gaudi
-:::{include} hpu-gaudi.inc.md
+:::{include} ai_accelerator/hpu-gaudi.inc.md
:start-after: "### Build image from source"
:end-before: "## Extra information"
:::
@@ -313,7 +313,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} AWS Neuron
:sync: neuron
-:::{include} neuron.inc.md
+:::{include} ai_accelerator/neuron.inc.md
:start-after: "### Build image from source"
:end-before: "## Extra information"
:::
@@ -323,7 +323,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} OpenVINO
:sync: openvino
-:::{include} openvino.inc.md
+:::{include} ai_accelerator/openvino.inc.md
:start-after: "### Build image from source"
:end-before: "## Extra information"
:::
@@ -340,7 +340,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Google TPU
:sync: tpu
-:::{include} tpu.inc.md
+:::{include} ai_accelerator/tpu.inc.md
:start-after: "## Extra information"
:::
@@ -349,7 +349,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} Intel Gaudi
:sync: hpu-gaudi
-:::{include} hpu-gaudi.inc.md
+:::{include} ai_accelerator/hpu-gaudi.inc.md
:start-after: "## Extra information"
:::
@@ -358,7 +358,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} AWS Neuron
:sync: neuron
-:::{include} neuron.inc.md
+:::{include} ai_accelerator/neuron.inc.md
:start-after: "## Extra information"
:::
@@ -367,7 +367,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you
::::{tab-item} OpenVINO
:sync: openvino
-:::{include} openvino.inc.md
+:::{include} ai_accelerator/openvino.inc.md
:start-after: "## Extra information"
:::
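Every hunk in this file follows the same shape: a MyST `include` directive sliced by `start-after`/`end-before` markers, with only the include path gaining an `ai_accelerator/` prefix. As a minimal standalone sketch of the resolved pattern (paths taken from the hunks above):

```
::::{tab-item} Google TPU
:sync: tpu

:::{include} ai_accelerator/tpu.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::

::::
```

MyST resolves a relative include path against the document containing the directive, which is why moving an index page up one directory level requires prefixing each of its includes with the subdirectory name.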


@@ -9,7 +9,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C
:selected:
:sync: x86
-:::{include} x86.inc.md
+:::{include} cpu/x86.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -19,7 +19,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C
::::{tab-item} ARM AArch64
:sync: arm
-:::{include} arm.inc.md
+:::{include} cpu/arm.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -29,7 +29,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C
::::{tab-item} Apple silicon
:sync: apple
-:::{include} apple.inc.md
+:::{include} cpu/apple.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -48,7 +48,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C
::::{tab-item} Intel/AMD x86
:sync: x86
-:::{include} x86.inc.md
+:::{include} cpu/x86.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::
@@ -58,7 +58,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C
::::{tab-item} ARM AArch64
:sync: arm
-:::{include} arm.inc.md
+:::{include} cpu/arm.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::
@@ -68,7 +68,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C
::::{tab-item} Apple silicon
:sync: apple
-:::{include} apple.inc.md
+:::{include} cpu/apple.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::
@@ -81,7 +81,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C
### Create a new Python environment
-:::{include} ../python_env_setup.inc.md
+:::{include} python_env_setup.inc.md
:::
### Pre-built wheels
@@ -96,7 +96,7 @@ Currently, there are no pre-built CPU wheels.
::::{tab-item} Intel/AMD x86
:sync: x86
-:::{include} x86.inc.md
+:::{include} cpu/x86.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -106,7 +106,7 @@ Currently, there are no pre-built CPU wheels.
::::{tab-item} ARM AArch64
:sync: arm
-:::{include} arm.inc.md
+:::{include} cpu/arm.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -116,7 +116,7 @@ Currently, there are no pre-built CPU wheels.
::::{tab-item} Apple silicon
:sync: apple
-:::{include} apple.inc.md
+:::{include} cpu/apple.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::


@@ -20,7 +20,7 @@ There are no pre-built wheels or images for this device, so you must build vLLM
### Build wheel from source
-:::{include} build.inc.md
+:::{include} cpu/build.inc.md
:::
Testing has been conducted on AWS Graviton3 instances for compatibility.


@@ -22,7 +22,7 @@ There are no pre-built wheels or images for this device, so you must build vLLM
### Build wheel from source
-:::{include} build.inc.md
+:::{include} cpu/build.inc.md
:::
:::{note}


@@ -9,7 +9,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
:selected:
:sync: cuda
-:::{include} cuda.inc.md
+:::{include} gpu/cuda.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -19,7 +19,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
::::{tab-item} AMD ROCm
:sync: rocm
-:::{include} rocm.inc.md
+:::{include} gpu/rocm.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -29,7 +29,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
::::{tab-item} Intel XPU
:sync: xpu
-:::{include} xpu.inc.md
+:::{include} gpu/xpu.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::
@@ -49,7 +49,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
::::{tab-item} NVIDIA CUDA
:sync: cuda
-:::{include} cuda.inc.md
+:::{include} gpu/cuda.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::
@@ -59,7 +59,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
::::{tab-item} AMD ROCm
:sync: rocm
-:::{include} rocm.inc.md
+:::{include} gpu/rocm.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::
@@ -69,7 +69,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
::::{tab-item} Intel XPU
:sync: xpu
-:::{include} xpu.inc.md
+:::{include} gpu/xpu.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::
@@ -82,7 +82,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
### Create a new Python environment
-:::{include} ../python_env_setup.inc.md
+:::{include} python_env_setup.inc.md
:::
:::::{tab-set}
@@ -91,7 +91,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G
::::{tab-item} NVIDIA CUDA
:sync: cuda
-:::{include} cuda.inc.md
+:::{include} gpu/cuda.inc.md
:start-after: "## Create a new Python environment"
:end-before: "### Pre-built wheels"
:::
@@ -122,7 +122,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} NVIDIA CUDA
:sync: cuda
-:::{include} cuda.inc.md
+:::{include} gpu/cuda.inc.md
:start-after: "### Pre-built wheels"
:end-before: "### Build wheel from source"
:::
@@ -132,7 +132,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} AMD ROCm
:sync: rocm
-:::{include} rocm.inc.md
+:::{include} gpu/rocm.inc.md
:start-after: "### Pre-built wheels"
:end-before: "### Build wheel from source"
:::
@@ -142,7 +142,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} Intel XPU
:sync: xpu
-:::{include} xpu.inc.md
+:::{include} gpu/xpu.inc.md
:start-after: "### Pre-built wheels"
:end-before: "### Build wheel from source"
:::
@@ -161,7 +161,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} NVIDIA CUDA
:sync: cuda
-:::{include} cuda.inc.md
+:::{include} gpu/cuda.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -171,7 +171,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} AMD ROCm
:sync: rocm
-:::{include} rocm.inc.md
+:::{include} gpu/rocm.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -181,7 +181,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} Intel XPU
:sync: xpu
-:::{include} xpu.inc.md
+:::{include} gpu/xpu.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::
@@ -200,7 +200,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} NVIDIA CUDA
:sync: cuda
-:::{include} cuda.inc.md
+:::{include} gpu/cuda.inc.md
:start-after: "### Pre-built images"
:end-before: "### Build image from source"
:::
@@ -210,7 +210,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} AMD ROCm
:sync: rocm
-:::{include} rocm.inc.md
+:::{include} gpu/rocm.inc.md
:start-after: "### Pre-built images"
:end-before: "### Build image from source"
:::
@@ -220,7 +220,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} Intel XPU
:sync: xpu
-:::{include} xpu.inc.md
+:::{include} gpu/xpu.inc.md
:start-after: "### Pre-built images"
:end-before: "### Build image from source"
:::
@@ -237,7 +237,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} NVIDIA CUDA
:sync: cuda
-:::{include} cuda.inc.md
+:::{include} gpu/cuda.inc.md
:start-after: "### Build image from source"
:end-before: "## Supported features"
:::
@@ -247,7 +247,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} AMD ROCm
:sync: rocm
-:::{include} rocm.inc.md
+:::{include} gpu/rocm.inc.md
:start-after: "### Build image from source"
:end-before: "## Supported features"
:::
@@ -257,7 +257,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} Intel XPU
:sync: xpu
-:::{include} xpu.inc.md
+:::{include} gpu/xpu.inc.md
:start-after: "### Build image from source"
:end-before: "## Supported features"
:::
@@ -274,7 +274,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} NVIDIA CUDA
:sync: cuda
-:::{include} cuda.inc.md
+:::{include} gpu/cuda.inc.md
:start-after: "## Supported features"
:::
@@ -283,7 +283,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} AMD ROCm
:sync: rocm
-:::{include} rocm.inc.md
+:::{include} gpu/rocm.inc.md
:start-after: "## Supported features"
:::
@@ -292,7 +292,7 @@ There is no extra information on creating a new Python environment for this devi
::::{tab-item} Intel XPU
:sync: xpu
-:::{include} xpu.inc.md
+:::{include} gpu/xpu.inc.md
:start-after: "## Supported features"
:::


@@ -1,28 +0,0 @@
-(installation-index)=
-# Installation
-vLLM supports the following hardware platforms:
-:::{toctree}
-:maxdepth: 1
-:hidden:
-gpu/index
-cpu/index
-ai_accelerator/index
-:::
-- <project:gpu/index.md>
-  - NVIDIA CUDA
-  - AMD ROCm
-  - Intel XPU
-- <project:cpu/index.md>
-  - Intel/AMD x86
-  - ARM AArch64
-  - Apple silicon
-- <project:ai_accelerator/index.md>
-  - Google TPU
-  - Intel Gaudi
-  - AWS Neuron
-  - OpenVINO
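Bulk path renames like the ones in this commit are easy to get subtly wrong, so a quick check that every `:::{include}` target still resolves can be worth running afterwards. A minimal sketch, with the caveat that the regex and the resolve-relative-to-the-containing-file rule are assumptions about MyST's include behaviour rather than anything taken from this commit:

```python
import re
from pathlib import Path

# Matches directive lines like `:::{include} ai_accelerator/tpu.inc.md`
INCLUDE_RE = re.compile(r"^:{3,}\{include\}\s+(\S+)")

def find_broken_includes(docs_root: str) -> list[tuple[str, str]]:
    """Return (markdown file, include target) pairs whose target is missing.

    Relative include paths are resolved against the directory of the file
    containing the directive, mirroring how MyST resolves them.
    """
    broken = []
    for md in Path(docs_root).rglob("*.md"):
        for line in md.read_text(encoding="utf-8").splitlines():
            m = INCLUDE_RE.match(line.strip())
            if m and not (md.parent / m.group(1)).exists():
                broken.append((str(md), m.group(1)))
    return broken
```

Running this over the docs source directory after a restructure like this one should return an empty list; any entries point at includes whose paths were not updated along with the file move.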