Write README and front page of doc (#147)

Woosuk Kwon
2023-06-18 03:19:38 -07:00
committed by GitHub
parent bf5f121c02
commit dcda03b4cb
9 changed files with 65 additions and 60 deletions


@@ -3,17 +3,20 @@
Installation
============
vLLM is a Python library that also contains some C++ and CUDA code.
This additional code requires compilation on the user's machine.
Requirements
------------
* OS: Linux
* Python: 3.8 or higher
* CUDA: 11.0 -- 11.8
* GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, etc.)
.. note::
   As of now, vLLM does not support CUDA 12.
   If you are using Hopper or Lovelace GPUs, please use CUDA 11.8 instead of CUDA 12.
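If you are unsure whether your machine meets these requirements, you can check from Python. The snippet below is only a sketch: the ``meets_requirements`` helper is hypothetical, while ``torch.cuda.get_device_capability`` and ``torch.version.cuda`` are standard PyTorch APIs.

.. code-block:: python

   def meets_requirements(capability, cuda_version):
       """Check a (major, minor) compute capability tuple and a CUDA
       version string (e.g. "11.8") against the requirements above."""
       cuda_major = int(cuda_version.split(".")[0])
       # Compute capability 7.0+ and CUDA 11.x (CUDA 12 is unsupported).
       return capability[0] >= 7 and cuda_major == 11

   def probe_with_torch():
       """Query the local GPU via PyTorch (requires torch and a CUDA GPU)."""
       import torch
       return meets_requirements(torch.cuda.get_device_capability(),
                                 torch.version.cuda)

For example, ``meets_requirements((7, 0), "11.8")`` passes, while a pre-Volta GPU or a CUDA 12 toolkit fails the check.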
.. tip::
   If you have trouble installing vLLM, we recommend using the NVIDIA PyTorch Docker image.
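For example, you could work inside an NVIDIA PyTorch container that ships CUDA 11.x (the tag below is illustrative; check NVIDIA's container catalog for a current CUDA 11.x image):

.. code-block:: console

   $ docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:22.12-py3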
@@ -45,7 +48,7 @@ You can install vLLM using pip:
Build from source
-----------------
You can also build and install vLLM from source:
.. code-block:: console
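   $ # A typical from-source flow might look like this (sketch; the clone
   $ # URL assumes the public vLLM repository on GitHub):
   $ git clone https://github.com/vllm-project/vllm.git
   $ cd vllm
   $ pip install -e .  # compiles the C++/CUDA extensions; this can take a while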