Move requirements into their own directory (#12547)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
commit 206e2577fa (parent e02883c400)
Author: Harry Mellor
Date: 2025-03-08 17:44:35 +01:00
Committed by: GitHub
50 changed files with 125 additions and 128 deletions
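The diffs below all follow one mechanical pattern: every `requirements-<name>.txt` file at the repo root moves to `requirements/<name>.txt`, and each reference is updated to match. A minimal sketch of that rename (illustrative only, not part of the commit; it uses plain `mv` on sample files rather than `git mv` in the real repo):

```bash
# Work in a scratch directory with sample files standing in for the repo.
cd "$(mktemp -d)"
touch requirements-hpu.txt requirements-cpu.txt requirements-build.txt

# The rename this commit applies: requirements-<name>.txt -> requirements/<name>.txt
mkdir -p requirements
for f in requirements-*.txt; do
  mv "$f" "requirements/${f#requirements-}"
done

ls requirements/
```

In the actual change, `git mv` would preserve history for each file, and the `pip install -r` lines in the docs are updated to the new `requirements/<name>.txt` paths, as the hunks below show.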


@@ -63,7 +63,7 @@ To build and install vLLM from source, run:
```console
git clone https://github.com/vllm-project/vllm.git
cd vllm
-pip install -r requirements-hpu.txt
+pip install -r requirements/hpu.txt
python setup.py develop
```
@@ -73,7 +73,7 @@ Currently, the latest features and performance optimizations are developed in Ga
git clone https://github.com/HabanaAI/vllm-fork.git
cd vllm-fork
git checkout habana_main
-pip install -r requirements-hpu.txt
+pip install -r requirements/hpu.txt
python setup.py develop
```


@@ -116,7 +116,7 @@ Once neuronx-cc and transformers-neuronx packages are installed, we will be able
```console
git clone https://github.com/vllm-project/vllm.git
cd vllm
-pip install -U -r requirements-neuron.txt
+pip install -U -r requirements/neuron.txt
VLLM_TARGET_DEVICE="neuron" pip install .
```


@@ -32,7 +32,7 @@ Second, clone vLLM and install prerequisites for the vLLM OpenVINO backend insta
```console
git clone https://github.com/vllm-project/vllm.git
cd vllm
-pip install -r requirements-build.txt --extra-index-url https://download.pytorch.org/whl/cpu
+pip install -r requirements/build.txt --extra-index-url https://download.pytorch.org/whl/cpu
```
Finally, install vLLM with OpenVINO backend:


@@ -151,7 +151,7 @@ pip uninstall torch torch-xla -y
Install build dependencies:
```bash
-pip install -r requirements-tpu.txt
+pip install -r requirements/tpu.txt
sudo apt-get install libopenblas-base libopenmpi-dev libomp-dev
```


@@ -25,7 +25,7 @@ After installation of XCode and the Command Line Tools, which include Apple Clan
```console
git clone https://github.com/vllm-project/vllm.git
cd vllm
-pip install -r requirements-cpu.txt
+pip install -r requirements/cpu.txt
pip install -e .
```


@@ -18,7 +18,7 @@ Third, install Python packages for vLLM CPU backend building:
```console
pip install --upgrade pip
pip install "cmake>=3.26" wheel packaging ninja "setuptools-scm>=8" numpy
-pip install -v -r requirements-cpu.txt --extra-index-url https://download.pytorch.org/whl/cpu
+pip install -v -r requirements/cpu.txt --extra-index-url https://download.pytorch.org/whl/cpu
```
Finally, build and install vLLM CPU backend:


@@ -148,7 +148,7 @@ To build vLLM using an existing PyTorch installation:
git clone https://github.com/vllm-project/vllm.git
cd vllm
python use_existing_torch.py
-pip install -r requirements-build.txt
+pip install -r requirements/build.txt
pip install -e . --no-build-isolation
```


@@ -84,7 +84,7 @@ Currently, there are no pre-built ROCm wheels.
# Install dependencies
$ pip install --upgrade numba scipy huggingface-hub[cli,hf_transfer] setuptools_scm
$ pip install "numpy<2"
-$ pip install -r requirements-rocm.txt
+$ pip install -r requirements/rocm.txt
# Build vLLM for MI210/MI250/MI300.
$ export PYTORCH_ROCM_ARCH="gfx90a;gfx942"


@@ -25,7 +25,7 @@ Currently, there are no pre-built XPU wheels.
```console
source /opt/intel/oneapi/setvars.sh
pip install --upgrade pip
-pip install -v -r requirements-xpu.txt
+pip install -v -r requirements/xpu.txt
```
- Finally, build and install vLLM XPU backend: