Use CUDA 12.4 as default for release and nightly wheels (#12098)

This commit is contained in:
Michael Goin
2025-02-26 22:06:37 -05:00
committed by GitHub
parent a31614e386
commit ca377cf1b9
4 changed files with 25 additions and 9 deletions


@@ -23,12 +23,12 @@ Therefore, it is recommended to install vLLM with a **fresh new** environment. I
 You can install vLLM using either `pip` or `uv pip`:
 ```console
-# Install vLLM with CUDA 12.1.
+# Install vLLM with CUDA 12.4.
 pip install vllm # If you are using pip.
 uv pip install vllm # If you are using uv.
 ```
-As of now, vLLM's binaries are compiled with CUDA 12.1 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 11.8 and public PyTorch release versions:
+As of now, vLLM's binaries are compiled with CUDA 12.4 and public PyTorch release versions by default. We also provide vLLM binaries compiled with CUDA 12.1, 11.8, and public PyTorch release versions:
 ```console
 # Install vLLM with CUDA 11.8.