vllm/requirements.txt

cmake >= 3.21
ninja # For faster builds.
psutil
ray >= 2.9
sentencepiece # Required for LLaMA tokenizer.
numpy
torch == 2.1.2
transformers >= 4.38.0 # Required for Gemma.
xformers == 0.0.23.post1 # Required for CUDA 12.1.
fastapi
uvicorn[standard]
pydantic >= 2.0 # Required for OpenAI server.
prometheus_client >= 0.18.0
pynvml == 11.5.0
triton >= 2.1.0
outlines == 0.0.34
cupy-cuda12x == 12.1.0 # Required for CUDA graphs. CUDA 11.8 users should install cupy-cuda11x instead.
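The file mixes exact pins (`==`), lower bounds (`>=`), and unconstrained names, so it can be useful to see at a glance which packages are hard-pinned before upgrading anything. A minimal stdlib-only sketch (the helper name `parse_requirement` is illustrative, not part of vLLM):

```python
import re

def parse_requirement(line: str):
    """Split one requirements.txt line into (name, specifier).

    Trailing comments are stripped; returns None for blank or
    comment-only lines. The specifier is normalized to have no spaces.
    """
    line = line.split("#", 1)[0].strip()  # drop trailing comment
    if not line:
        return None
    # Package name (optionally with extras like uvicorn[standard]),
    # followed by an optional version specifier.
    m = re.match(r"([A-Za-z0-9_.\-]+(?:\[[^\]]+\])?)\s*(.*)", line)
    return m.group(1), m.group(2).replace(" ", "")

# Example: find the exact pins among a few lines from this file.
lines = [
    "torch == 2.1.2",
    "ray >= 2.9",
    "uvicorn[standard]",
    "sentencepiece  # Required for LLaMA tokenizer.",
]
parsed = [p for p in map(parse_requirement, lines) if p]
pinned = [name for name, spec in parsed if spec.startswith("==")]
# pinned == ["torch"]
```

The whole file installs as usual with `pip install -r requirements.txt`; note the `cupy-cuda12x` comment, since CUDA 11.8 users must swap that pin for `cupy-cuda11x` by hand.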