vllm/docs/getting_started/installation/gpu.md
vllmellm aaa2efbe98 [DOC] [ROCm] Update docker deployment doc (#33971)
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: TJian <tunjian.tan@embeddedllm.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
2026-02-06 10:05:35 -08:00


---
toc_depth: 3
---
# GPU
vLLM is a Python library that supports the following GPU variants. Select your GPU type to see vendor-specific instructions:
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:installation"

=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu.rocm.inc.md:installation"

=== "Intel XPU"

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:installation"
## Requirements
- OS: Linux
- Python: 3.10 -- 3.13
!!! note
    vLLM does not support Windows natively. To run vLLM on Windows, you can use the Windows Subsystem for Linux (WSL) with a compatible Linux distribution, or use a community-maintained fork, e.g. [https://github.com/SystemPanic/vllm-windows](https://github.com/SystemPanic/vllm-windows).
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:requirements"

=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu.rocm.inc.md:requirements"

=== "Intel XPU"

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:requirements"
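Whatever your GPU vendor, the Python requirement above is easy to verify up front. As a minimal sketch (not part of vLLM itself), you can assert that the active interpreter falls inside the supported range:

```shell
# Verify the interpreter is within vLLM's supported range (3.10 -- 3.13).
python3 -c 'import sys; v = sys.version_info[:2]; assert (3, 10) <= v <= (3, 13), f"unsupported Python {v}"'
```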
## Set up using Python
### Create a new Python environment
--8<-- "docs/getting_started/installation/python_env_setup.inc.md"
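Before the vendor-specific steps below, an isolated environment can also be created with nothing but the standard library; this is a generic sketch (the environment-setup page above may recommend a dedicated tool instead):

```shell
# Create and activate a virtual environment using the stdlib venv module.
python3 -m venv .venv
source .venv/bin/activate
```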
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:set-up-using-python"

=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu.rocm.inc.md:set-up-using-python"

=== "Intel XPU"

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:set-up-using-python"
### Pre-built wheels {#pre-built-wheels}
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:pre-built-wheels"

=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu.rocm.inc.md:pre-built-wheels"

=== "Intel XPU"

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:pre-built-wheels"
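In the simplest case — the NVIDIA CUDA variant with the default CUDA build — installing a pre-built wheel is a single command; the vendor tabs above cover extra index URLs and version pinning where they apply:

```shell
# Install the latest released vLLM wheel from PyPI.
pip install vllm
```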
### Build wheel from source
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:build-wheel-from-source"

=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu.rocm.inc.md:build-wheel-from-source"

=== "Intel XPU"

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:build-wheel-from-source"
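The common shape of a source build, regardless of vendor, is a clone followed by an editable install; the vendor tabs above document the flags and prerequisites (compilers, toolkits, prebuilt-kernel reuse) that differ per platform. A minimal sketch:

```shell
# Clone the repository and build/install vLLM in editable mode.
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```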
## Set up using Docker
### Pre-built images
<!-- markdownlint-disable MD025 -->
# --8<-- [start:pre-built-images]
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:pre-built-images"

=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu.rocm.inc.md:pre-built-images"

=== "Intel XPU"

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:pre-built-images"
# --8<-- [end:pre-built-images]
<!-- markdownlint-enable MD025 -->
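As a sketch for the NVIDIA CUDA variant, serving a model from the pre-built `vllm/vllm-openai` image typically looks like the following; the model name is a placeholder, and ROCm/XPU images use different runtimes and device flags (see the tabs above):

```shell
MODEL=Qwen/Qwen3-0.6B   # placeholder; substitute the model you want to serve

# Launch the OpenAI-compatible server from the pre-built CUDA image.
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model "$MODEL"
```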
<!-- markdownlint-disable MD001 -->
### Build image from source
<!-- markdownlint-enable MD001 -->
<!-- markdownlint-disable MD025 -->
# --8<-- [start:build-image-from-source]
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:build-image-from-source"

=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu.rocm.inc.md:build-image-from-source"

=== "Intel XPU"

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:build-image-from-source"
# --8<-- [end:build-image-from-source]
<!-- markdownlint-enable MD025 -->
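A hedged sketch of a from-source image build for the NVIDIA CUDA variant; the Dockerfile path, build target, and tag here are assumptions that may not match the repository's current layout, so defer to the tab above for the authoritative command:

```shell
# Build an OpenAI-server image from a vLLM checkout with BuildKit.
DOCKER_BUILDKIT=1 docker build . \
    --file docker/Dockerfile \
    --target vllm-openai \
    --tag my-vllm/vllm-openai:dev
```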
## Supported features
=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:supported-features"

=== "AMD ROCm"

    --8<-- "docs/getting_started/installation/gpu.rocm.inc.md:supported-features"

=== "Intel XPU"

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:supported-features"