[V1] Logprobs and prompt logprobs support (#9880)
This PR adds support for sample logprobs & prompt logprobs to vLLM v1.
New behavior:
- During model execution, the model runner computes sample logprobs (if the user-provided logprobs setting is not None) and prompt logprobs (if the user-provided prompt_logprobs setting is not None). For both sample and prompt logprobs, the engine core returns 3 vectors: token ids, token logprob values, and token ranks. Ranks reflect tokens' 1-indexed positions in the vocabulary vector after sorting the vocabulary by log probability in descending order.
- In scheduler.update_from_output(), sample and prompt logprobs are incorporated into the EngineCoreOutput data structure, which is transferred to the engine client. If multiprocessing is enabled, sample and prompt logprobs are (de)serialized along with the EngineCoreOutput data structure.
- During output processing, the LogprobsProcessor transforms the triplet of token ids, token logprob values, and token ranks into the OpenAI-compatible List[Dict[token id, Logprob]] format (for sample and prompt logprobs, respectively).
- Each Logprob instance (whether sample- or prompt-) consists of a token's log-probability, rank, and detokenized string representation. Note that logprob detokenization is handled by the LogprobsProcessor, not the detokenizer.
Signed-off-by: Andrew Feldman <afeldman@neuralmagic.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Co-authored-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
2025-02-07 10:26:20 -05:00
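The rank definition and the triplet-to-dict transformation described in the PR message above can be sketched as standalone Python. This is an illustrative sketch, not vLLM's actual `LogprobsProcessor` implementation; the helper names `compute_ranks` and `make_logprob_dicts` are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Logprob:
    logprob: float  # token's log-probability
    rank: int  # 1-indexed position in the vocab sorted by logprob, descending
    decoded_token: str  # detokenized string form


def compute_ranks(vocab_logprobs: list[float], token_ids: list[int]) -> list[int]:
    # Rank of a token = 1 + number of vocab entries with strictly higher
    # logprob, i.e. its 1-indexed position after a descending sort of the
    # vocabulary (ignoring tie-breaking details).
    return [
        1 + sum(lp > vocab_logprobs[t] for lp in vocab_logprobs)
        for t in token_ids
    ]


def make_logprob_dicts(
    token_ids: list[list[int]],
    logprobs: list[list[float]],
    ranks: list[list[int]],
    detok: dict[int, str],
) -> list[dict[int, Logprob]]:
    # Zip the three parallel vectors into one {token_id: Logprob} dict per
    # sequence position, mirroring the OpenAI-compatible output shape.
    return [
        {tid: Logprob(lp, r, detok[tid]) for tid, lp, r in zip(ids, lps, rs)}
        for ids, lps, rs in zip(token_ids, logprobs, ranks)
    ]
```

For example, `make_logprob_dicts([[7]], [[-0.1]], [[1]], {7: "the"})` yields a one-element list whose dict maps token id 7 to its `Logprob`.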
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project

from __future__ import annotations

import random
from typing import TYPE_CHECKING

import pytest

from vllm import LLM
from vllm.sampling_params import SamplingParams, StructuredOutputsParams
from vllm.v1.metrics.reader import Counter, Gauge, Histogram, Metric, Vector

if TYPE_CHECKING:
    from tests.conftest import VllmRunner

MODEL = "facebook/opt-125m"
DTYPE = "half"


def _vllm_model(
    apc: bool,
    vllm_runner: type[VllmRunner],
    *,
    skip_tokenizer_init: bool = False,
):
    """Set up VllmRunner instance."""
    return vllm_runner(
        MODEL,
        dtype=DTYPE,
        max_model_len=128,
        enforce_eager=True,
        enable_prefix_caching=apc,
        gpu_memory_utilization=0.5,
        skip_tokenizer_init=skip_tokenizer_init,
    )


@pytest.fixture(
    # Function scope decouples tests & allows
    # env var adjustment via monkeypatch
    scope="function",
    # Prefix caching
    params=[False, True],
)
def vllm_model(vllm_runner, request):
    """VllmRunner test fixture parameterized by APC True/False."""
    with _vllm_model(request.param, vllm_runner) as vllm_model:
        yield vllm_model


@pytest.fixture(scope="function")
def vllm_model_apc(vllm_runner):
    """VllmRunner test fixture with APC."""
    with _vllm_model(True, vllm_runner) as vllm_model:
        yield vllm_model


@pytest.fixture(
    # Function scope decouples tests & allows
    # env var adjustment via monkeypatch
    scope="function",
    # Prefix caching
    params=[False, True],
)
def vllm_model_skip_tokenizer_init(vllm_runner, request):
    """VllmRunner test fixture with tokenizer init skipped, parameterized by APC."""
    with _vllm_model(
        request.param,
        vllm_runner,
        skip_tokenizer_init=True,
    ) as vllm_model:
        yield vllm_model


def _get_test_sampling_params(
    prompt_list: list[str],
    seed: int | None = 42,
    structured_outputs: bool = False,
) -> tuple[list[SamplingParams], list[int]]:
    """Generate random sampling params for a batch."""

    def get_mostly_n_gt1() -> int:
        r"""Mostly n \in [2,20], ~1/3 n=1"""
        x = random.randint(0, 28)
        if x < 10:
            return 1
        else:
            return x - 8

    n_list = [get_mostly_n_gt1() for _ in range(len(prompt_list))]
    # High temperature to maximize the chance of unique completions
    return [
        SamplingParams(
            temperature=0.95,
            top_p=0.95,
            n=n,
            seed=seed,
            structured_outputs=StructuredOutputsParams(regex="[0-9]+")
            if structured_outputs
            else None,
        )
        for n in n_list
    ], n_list
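As a sanity check on the claimed distribution of `n` (roughly 1/3 of draws give n=1, the rest land in [2, 20]), the draw can be enumerated exhaustively. This standalone sketch re-implements the mapping for illustration only:

```python
# x is drawn uniformly from the 29 integers [0, 28]; 10 of them (x < 10)
# map to n=1, giving P(n == 1) = 10/29, roughly 1/3, and the remaining 19
# map one-to-one onto n in [2, 20] via n = x - 8.
outcomes = [1 if x < 10 else x - 8 for x in range(29)]

assert outcomes.count(1) == 10  # 10 of the 29 equally likely draws give n=1
assert sorted(set(outcomes)) == list(range(1, 21))  # support is [1, 20]
```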


def test_compatibility_with_skip_tokenizer_init(
    vllm_model_skip_tokenizer_init: VllmRunner,
    example_prompts: list[str],
):
    # Case 1: Structured output request should raise an error.
    sampling_params_list, _ = _get_test_sampling_params(
        example_prompts,
        structured_outputs=True,
    )
    llm: LLM = vllm_model_skip_tokenizer_init.llm
    with pytest.raises(ValueError):
        _ = llm.generate(example_prompts, sampling_params_list)


def test_parallel_sampling(vllm_model, example_prompts) -> None:
    """Test passes if parallel sampling `n>1` yields `n` unique completions.

    Args:
      vllm_model: VllmRunner instance under test.
      example_prompts: test fixture providing prompts for testing.
    """
    sampling_params_list, n_list = _get_test_sampling_params(example_prompts)
    llm: LLM = vllm_model.llm
    outputs = llm.generate(example_prompts, sampling_params_list)

    # Validate each request response
    for out, n in zip(outputs, n_list):
        completion_counts: dict[str, int] = {}
        # Assert correct number of completions
        assert len(out.outputs) == n, f"{len(out.outputs)} completions; {n} expected."
        for idx in range(n):
            comp = out.outputs[idx]
            # Assert correct completion indices
            assert comp.index == idx, f"Index {comp.index}; expected {idx}."
            text = comp.text
            completion_counts[text] = completion_counts.get(text, 0) + 1
        # Assert unique completions
        if len(completion_counts) != n:
            repeats = {txt: num for (txt, num) in completion_counts.items() if num > 1}
            raise AssertionError(
                f"{len(completion_counts)} unique completions; expected"
                f" {n}. Repeats: {repeats}"
            )


def test_engine_metrics(vllm_runner, example_prompts):
    max_tokens = 100
    # Use spec decoding to test num_accepted_tokens_per_pos
    speculative_config = {
        "method": "ngram",
        "prompt_lookup_max": 5,
        "prompt_lookup_min": 3,
        "num_speculative_tokens": 5,
    }

    with vllm_runner(
        MODEL,
        speculative_config=speculative_config,
        disable_log_stats=False,
    ) as vllm_model:
        llm: LLM = vllm_model.llm
        sampling_params = SamplingParams(temperature=0.0, max_tokens=max_tokens)
        outputs = llm.generate(example_prompts, sampling_params)

        n_prompts = len(example_prompts)
        assert len(outputs) == n_prompts

        total_tokens = 0
        for out in outputs:
            assert len(out.outputs) == 1
            total_tokens += len(out.outputs[0].token_ids)
        assert total_tokens == max_tokens * n_prompts

        metrics = llm.get_metrics()

        def find_metric(name) -> list[Metric]:
            found = []
            for metric in metrics:
                if metric.name == name:
                    found.append(metric)
            return found

        num_requests_running = find_metric("vllm:num_requests_running")
        assert len(num_requests_running) == 1
        assert isinstance(num_requests_running[0], Gauge)
        assert num_requests_running[0].value == 0.0

        generation_tokens = find_metric("vllm:generation_tokens")
        assert len(generation_tokens) == 1
        assert isinstance(generation_tokens[0], Counter)
        assert generation_tokens[0].value == total_tokens

        request_generation_tokens = find_metric("vllm:request_generation_tokens")
        assert len(request_generation_tokens) == 1
        assert isinstance(request_generation_tokens[0], Histogram)
        assert "+Inf" in request_generation_tokens[0].buckets
        assert request_generation_tokens[0].buckets["+Inf"] == n_prompts
        assert request_generation_tokens[0].count == n_prompts
        assert request_generation_tokens[0].sum == total_tokens

        num_accepted_tokens_per_pos = find_metric(
            "vllm:spec_decode_num_accepted_tokens_per_pos"
        )
        assert len(num_accepted_tokens_per_pos) == 1
        assert isinstance(num_accepted_tokens_per_pos[0], Vector)
        assert len(num_accepted_tokens_per_pos[0].values) == 5


@pytest.mark.parametrize("model", ["meta-llama/Llama-3.2-1B-Instruct"])
def test_skip_tokenizer_initialization(model: str):
    # This test checks if the flag skip_tokenizer_init skips the initialization
    # of tokenizer and detokenizer. The generated output is expected to contain
    # token ids.
    llm = LLM(
        model=model,
        skip_tokenizer_init=True,
        enforce_eager=True,
    )
    sampling_params = SamplingParams(prompt_logprobs=True, detokenize=True)

    with pytest.raises(ValueError, match="cannot pass text prompts when"):
        llm.generate("abc", sampling_params)

    outputs = llm.generate(
        {"prompt_token_ids": [1, 2, 3]}, sampling_params=sampling_params
    )
    assert len(outputs) > 0
    completions = outputs[0].outputs
    assert len(completions) > 0
    assert completions[0].text == ""
    assert completions[0].token_ids