Add documentation on how to do incremental builds (#2796)

This commit is contained in:
Philipp Moritz
2024-02-07 14:42:02 -08:00
committed by GitHub
parent c81dddb45c
commit 931746bc6d
2 changed files with 15 additions and 0 deletions

@@ -67,3 +67,13 @@ You can also build and install vLLM from source:

   $ # Use `--ipc=host` to make sure the shared memory is large enough.
   $ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:23.10-py3

.. note::
   If you are developing the C++ backend of vLLM, consider building vLLM with:

   .. code-block:: console

      $ python setup.py develop

   since it will give you incremental builds. The downside is that this method
   is `deprecated by setuptools <https://github.com/pypa/setuptools/issues/917>`_.
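The incremental workflow the note describes amounts to re-running the in-place build after each change; a minimal sketch of the edit-rebuild loop (the `csrc/` path is illustrative):

.. code-block:: console

   $ # Initial in-place build: compiled extensions land in the source tree.
   $ python setup.py develop
   $ # ... edit C++/CUDA sources, e.g. under csrc/ ...
   $ # Re-run the same command: only the changed objects are recompiled.
   $ python setup.py develop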