# Generative Models

vLLM provides first-class support for generative models, which cover most large language models (LLMs).

In vLLM, generative models implement the [VllmModelForTextGeneration][vllm.model_executor.models.VllmModelForTextGeneration] interface.
Based on the final hidden states of the input, these models output log probabilities of the tokens to generate,
which are then passed through [Sampler][vllm.v1.sample.sampler.Sampler] to obtain the final text.
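
As an illustration of that last step, here is a minimal plain-Python sketch (a toy stand-in, not vLLM's actual `Sampler`) of turning final-position logits into a token, either greedily or by sampling:

```python
import math
import random

def logits_to_probs(logits: dict[str, float]) -> dict[str, float]:
    """Numerically stable softmax over a token -> logit mapping."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical logits for the next token over a tiny, made-up vocabulary.
logits = {"Paris": 5.2, "London": 3.1, "Tokyo": 1.4}
probs = logits_to_probs(logits)

greedy_token = max(probs, key=probs.get)  # always the most likely token
sampled_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(greedy_token)  # prints "Paris"
```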
## Configuration

### Model Runner (`--runner`)

Run a model in generation mode via the option `--runner generate`.

!!! tip
    There is no need to set this option in the vast majority of cases, as vLLM can
    automatically detect the model runner to use via `--runner auto`.
## Offline Inference
The [LLM][vllm.LLM] class provides various methods for offline inference.
See [configuration](../api/README.md#configuration) for a list of options when initializing the model.
### `LLM.generate`
The [generate][vllm.LLM.generate] method is available to all generative models in vLLM.
It is similar to [its counterpart in HF Transformers](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationMixin.generate),
except that tokenization and detokenization are also performed automatically.
```python
from vllm import LLM
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate("Hello, my name is")
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
You can optionally control the language generation by passing [SamplingParams][vllm.SamplingParams].
For example, you can use greedy sampling by setting `temperature=0`:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0)
outputs = llm.generate("Hello, my name is", params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
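
For intuition, temperature rescales the token distribution before sampling; as it approaches zero, nearly all probability mass lands on the highest-logit token, which is why `temperature=0` amounts to greedy decoding. A plain-Python illustration (not vLLM internals):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Scale logits by 1/temperature, then apply a numerically stable softmax."""
    scaled = [v / temperature for v in logits]
    m = max(scaled)
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    return [v / total for v in exps]

logits = [2.0, 1.0, 0.5]
for t in (1.0, 0.5, 0.1):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
# As the temperature shrinks, almost all probability mass concentrates on the
# highest-logit token, so sampling degenerates into argmax (greedy decoding).
```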
!!! important
    By default, vLLM applies the sampling parameters recommended by the model creator, taken from the `generation_config.json` file in the Hugging Face model repository if it exists. In most cases, this provides the best results by default when [SamplingParams][vllm.SamplingParams] is not specified.

    However, if vLLM's default sampling parameters are preferred, pass `generation_config="vllm"` when creating the [LLM][vllm.LLM] instance.
A code example can be found here: [examples/offline_inference/basic/basic.py](../../examples/offline_inference/basic/basic.py)
### `LLM.beam_search`
The [beam_search][vllm.LLM.beam_search] method implements [beam search](https://huggingface.co/docs/transformers/en/generation_strategies#beam-search) on top of [generate][vllm.LLM.generate].
For example, to search using 5 beams and output at most 50 tokens:
```python
from vllm import LLM
from vllm.sampling_params import BeamSearchParams
llm = LLM(model="facebook/opt-125m")
params = BeamSearchParams(beam_width=5, max_tokens=50)
outputs = llm.beam_search([{"prompt": "Hello, my name is "}], params)
for output in outputs:
    generated_text = output.sequences[0].text
    print(f"Generated text: {generated_text!r}")
```
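
For intuition about what `beam_width` controls, here is a toy beam search over a hand-made table of next-token log-probabilities (purely illustrative, unrelated to vLLM's implementation): at each step, every beam is extended by every candidate token, and only the `beam_width` highest-scoring partial sequences are kept.

```python
import math

# Hypothetical next-token log-probabilities, keyed by the text generated so far.
TOY_MODEL = {
    "": {"the": math.log(0.6), "a": math.log(0.4)},
    "the": {"cat": math.log(0.5), "dog": math.log(0.5)},
    "a": {"cat": math.log(0.9), "dog": math.log(0.1)},
}

def toy_beam_search(beam_width: int, max_tokens: int) -> list[tuple[list[str], float]]:
    """Keep the `beam_width` highest-scoring partial sequences at each step."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(max_tokens):
        candidates = []
        for seq, score in beams:
            next_tokens = TOY_MODEL.get(" ".join(seq), {})
            if not next_tokens:  # no known continuation: carry the beam forward
                candidates.append((seq, score))
                continue
            for tok, logp in next_tokens.items():
                candidates.append((seq + [tok], score + logp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

for seq, score in toy_beam_search(beam_width=2, max_tokens=2):
    print(" ".join(seq), round(score, 3))
```

Note that the winning sequence ("a cat") does not start with the single most likely first token ("the"); keeping several beams alive is what lets beam search recover such sequences, which greedy decoding would miss.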
### `LLM.chat`
The [chat][vllm.LLM.chat] method implements chat functionality on top of [generate][vllm.LLM.generate].
In particular, it accepts input similar to [OpenAI Chat Completions API](https://platform.openai.com/docs/api-reference/chat)
and automatically applies the model's [chat template](https://huggingface.co/docs/transformers/en/chat_templating) to format the prompt.
!!! important
    In general, only instruction-tuned models have a chat template.
    Base models may perform poorly as they are not trained to respond to chat conversations.
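
As a rough illustration of what applying a chat template does, the following toy function (hand-written for this example; real templates are Jinja programs shipped with the model, and their role markers differ per model) flattens an OpenAI-style message list into a single prompt string:

```python
def apply_toy_chat_template(messages: list[dict[str, str]]) -> str:
    """Flatten OpenAI-style chat messages into one prompt string."""
    parts = []
    for msg in messages:
        # The <|role|> markers here are invented for illustration only.
        parts.append(f"<|{msg['role']}|>\n{msg['content']}\n")
    parts.append("<|assistant|>\n")  # cue the model to respond next
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
print(apply_toy_chat_template(messages))
```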
??? code
    ```python
    from vllm import LLM

    llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")

    conversation = [
        {
            "role": "system",
            "content": "You are a helpful assistant",
        },
        {
            "role": "user",
            "content": "Hello",
        },
        {
            "role": "assistant",
            "content": "Hello! How can I assist you today?",
        },
        {
            "role": "user",
            "content": "Write an essay about the importance of higher education.",
        },
    ]

    outputs = llm.chat(conversation)

    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
    ```
A code example can be found here: [examples/offline_inference/basic/chat.py](../../examples/offline_inference/basic/chat.py)
If the model doesn't have a chat template or you want to specify another one,
you can explicitly pass a chat template:
```python
from vllm.entrypoints.chat_utils import load_chat_template
# You can find a list of existing chat templates under `examples/`
custom_template = load_chat_template(chat_template="<path_to_template>")
print("Loaded chat template:", custom_template)
outputs = llm.chat(conversation, chat_template=custom_template)
```
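
For reference, chat templates are Jinja templates; a minimal hand-written one (illustrative only, not one of the bundled examples under `examples/`) might look like:

```jinja
{%- for message in messages -%}
<|{{ message['role'] }}|>
{{ message['content'] }}
{%- endfor -%}
<|assistant|>
```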
## Online Serving
Our [OpenAI-Compatible Server](../serving/openai_compatible_server.md) provides endpoints that correspond to the offline APIs:

- [Completions API](../serving/openai_compatible_server.md#completions-api) is similar to `LLM.generate` but only accepts text.
- [Chat API](../serving/openai_compatible_server.md#chat-api) is similar to `LLM.chat`, accepting both text and [multi-modal inputs](../features/multimodal_inputs.md) for models with a chat template.
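
As a sketch of the request body the Chat API expects (field names follow the OpenAI schema; the model name, server address, and message contents below are placeholders), the JSON payload can be assembled with the standard library:

```python
import json

# Placeholder values: substitute your served model name and message contents.
payload = {
    "model": "facebook/opt-125m",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
    "temperature": 0,
}
body = json.dumps(payload)
print(body)
# POST this body to the server's /v1/chat/completions endpoint with
# Content-Type: application/json (e.g. via curl or the OpenAI client).
```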