# LlamaIndex

vLLM is also available via LlamaIndex.

To install LlamaIndex, run:

```bash
pip install llama-index-llms-vllm -q
```

To run inference on one or more GPUs, use the `Vllm` class from LlamaIndex:

```python
from llama_index.llms.vllm import Vllm

llm = Vllm(
    model="microsoft/Orca-2-7b",
    tensor_parallel_size=4,
    max_new_tokens=100,
    vllm_kwargs={"gpu_memory_utilization": 0.5},
)
```

Please refer to this Tutorial for more details.