[Docs] Clean up v1/metrics.md (#21449)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Author: Michael Yao
Date: 2025-07-23 18:37:25 +08:00
Committed by: GitHub
Parent: 6929f8b437
Commit: 2cc5016a19


@@ -5,17 +5,17 @@ Ensure the v1 LLM Engine exposes a superset of the metrics available in v0.
## Objectives

- Achieve parity of metrics between v0 and v1.
- The priority use case is accessing these metrics via Prometheus, as this is what we expect to be used in production environments.
- Logging support (i.e. printing metrics to the info log) is provided for more ad-hoc testing, debugging, development, and exploratory use cases.

## Background

Metrics in vLLM can be categorized as follows:

1. Server-level metrics: Global metrics that track the state and performance of the LLM engine. These are typically exposed as Gauges or Counters in Prometheus.
2. Request-level metrics: Metrics that track the characteristics (e.g. size and timing) of individual requests. These are typically exposed as Histograms in Prometheus and are often the SLOs that an SRE monitoring vLLM will be tracking.

The mental model is that server-level metrics help explain the values of request-level metrics.
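To make the two categories concrete, here is a minimal sketch of how they map onto `prometheus_client` metric types. The metric names mirror real vLLM metrics, but the wiring is illustrative only, not vLLM's actual implementation:

```python
# Illustrative sketch only - not vLLM's actual metric wiring.
from prometheus_client import Gauge, Histogram

# Server-level: global engine state, exposed as a Gauge or Counter.
num_requests_running = Gauge(
    "vllm:num_requests_running",
    "Number of requests currently running.",
    labelnames=["model_name"])

# Request-level: per-request characteristics, exposed as a Histogram.
e2e_request_latency = Histogram(
    "vllm:e2e_request_latency_seconds",
    "Histogram of end-to-end request latency in seconds.",
    labelnames=["model_name"])

num_requests_running.labels(model_name="my-model").set(3)
e2e_request_latency.labels(model_name="my-model").observe(0.52)
```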
### v0 Metrics
@@ -65,20 +65,20 @@ vLLM also provides [a reference example](../../examples/online_serving/prometheu
The subset of metrics exposed in the Grafana dashboard gives us an indication of which metrics are especially important:

- `vllm:e2e_request_latency_seconds_bucket` - End to end request latency measured in seconds.
- `vllm:prompt_tokens_total` - Prompt tokens.
- `vllm:generation_tokens_total` - Generation tokens.
- `vllm:time_per_output_token_seconds` - Inter-token latency (Time Per Output Token, TPOT) in seconds.
- `vllm:time_to_first_token_seconds` - Time to First Token (TTFT) latency in seconds.
- `vllm:num_requests_running` (also, `_swapped` and `_waiting`) - Number of requests in the RUNNING, WAITING, and SWAPPED states.
- `vllm:gpu_cache_usage_perc` - Percentage of used cache blocks by vLLM.
- `vllm:request_prompt_tokens` - Request prompt length.
- `vllm:request_generation_tokens` - Request generation length.
- `vllm:request_success_total` - Number of finished requests by their finish reason: either an EOS token was generated or the max sequence length was reached.
- `vllm:request_queue_time_seconds` - Queue time.
- `vllm:request_prefill_time_seconds` - Requests prefill time.
- `vllm:request_decode_time_seconds` - Requests decode time.
- `vllm:request_max_num_generation_tokens` - Max generation tokens in a sequence group.

See [the PR which added this Dashboard](gh-pr:2316) for interesting and useful background on the choices made here.
@@ -103,7 +103,7 @@ In v0, metrics are collected in the engine core process and we use multi-process
### Built-in Python/Process Metrics

The following metrics are supported by default by `prometheus_client`, but they are not exposed when multi-process mode is used:

- `python_gc_objects_collected_total`
- `python_gc_objects_uncollectable_total`
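For context, here is a minimal sketch of how multi-process mode is typically enabled with `prometheus_client` (the directory path is an arbitrary example); it is in this mode that the built-in platform and process collectors producing the metrics above are disabled:

```python
import os

# Must be set before any metrics are created. Example path only.
os.makedirs("/tmp/prometheus_multiproc", exist_ok=True)
os.environ["PROMETHEUS_MULTIPROC_DIR"] = "/tmp/prometheus_multiproc"

from prometheus_client import CollectorRegistry, generate_latest, multiprocess

# Aggregate samples written by all worker processes into one registry.
registry = CollectorRegistry()
multiprocess.MultiProcessCollector(registry)

print(generate_latest(registry).decode())
```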
@@ -158,6 +158,7 @@ In v1, we wish to move computation and overhead out of the engine core
process to minimize the time between each forward pass.

The overall idea of V1 EngineCore design is:

- EngineCore is the inner loop. Performance is most critical here
- AsyncLLM is the outer loop. This is overlapped with GPU execution
  (ideally), so this is where any "overheads" should be if
@@ -178,7 +179,7 @@ time" (`time.time()`) to calculate intervals as the former is
unaffected by system clock changes (e.g. from NTP).

It's also important to note that monotonic clocks differ between
processes - each process has its own reference point. So it is
meaningless to compare monotonic timestamps from different processes.

Therefore, in order to calculate an interval, we must compare two
monotonic timestamps from the same process.
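A small sketch of the pattern described here (the intervening work is simulated with a sleep):

```python
import time

# Both timestamps come from the same process and the same monotonic
# clock, so their difference is a valid interval even if the system
# clock is adjusted (e.g. by NTP) in between.
arrival_time = time.monotonic()
time.sleep(0.1)  # stand-in for queueing and prefill work
first_token_time = time.monotonic()

ttft = first_token_time - arrival_time
print(f"TTFT: {ttft:.3f}s")
```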
@@ -343,14 +344,15 @@ vllm:time_to_first_token_seconds_bucket{le="0.1",model_name="meta-llama/Llama-3.
vllm:time_to_first_token_seconds_count{model_name="meta-llama/Llama-3.1-8B-Instruct"} 140.0
```

!!! note
    The choice of histogram buckets to be most useful to users
    across a broad set of use cases is not straightforward and will
    require refinement over time.
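For illustration, bucket boundaries are supplied when the histogram is declared; the boundaries below are an example only, not necessarily the ones vLLM ships:

```python
from prometheus_client import Histogram

ttft = Histogram(
    "vllm:time_to_first_token_seconds",
    "Histogram of time to first token in seconds.",
    labelnames=["model_name"],
    # Example boundaries only - tuning these for a broad set of use
    # cases is exactly the hard part described in the note above.
    buckets=[0.001, 0.01, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])

ttft.labels(model_name="meta-llama/Llama-3.1-8B-Instruct").observe(0.042)
```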
### Cache Config Info

`prometheus_client` has support for
[Info metrics](https://prometheus.github.io/client_python/instrumenting/info/)
which are equivalent to a `Gauge` whose value is permanently set to 1,
but exposes interesting key/value pair information via labels. This is
used for information about an instance that does not change - so it
@@ -363,14 +365,11 @@ We use this concept for the `vllm:cache_config_info` metric:
# HELP vllm:cache_config_info Information of the LLMEngine CacheConfig
# TYPE vllm:cache_config_info gauge
vllm:cache_config_info{block_size="16",cache_dtype="auto",calculate_kv_scales="False",cpu_offload_gb="0",enable_prefix_caching="False",gpu_memory_utilization="0.9",...} 1.0
```

However, `prometheus_client` has
[never supported Info metrics in multiprocessing mode](https://github.com/prometheus/client_python/pull/300) -
for [unclear reasons](gh-pr:7279#discussion_r1710417152). We
simply use a `Gauge` metric set to 1 and
`multiprocess_mode="mostrecent"` instead.
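A sketch of that workaround, with the cache-config key/value pairs flattened into labels (the label set is abbreviated here):

```python
from prometheus_client import Gauge

# Info-style metric: a Gauge pinned to 1 whose labels carry the data.
# "mostrecent" keeps only the latest sample across worker processes.
cache_config_info = Gauge(
    "vllm:cache_config_info",
    "Information of the LLMEngine CacheConfig",
    labelnames=["block_size", "cache_dtype", "gpu_memory_utilization"],
    multiprocess_mode="mostrecent")

cache_config_info.labels(
    block_size="16",
    cache_dtype="auto",
    gpu_memory_utilization="0.9",
).set(1)
```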
@@ -395,11 +394,9 @@ distinguish between per-adapter counts. This should be revisited.
Note that `multiprocess_mode="livemostrecent"` is used - the most
recent metric is used, but only from currently running processes.

This was added in <gh-pr:9477> and there is
[at least one known user](https://github.com/kubernetes-sigs/gateway-api-inference-extension/pull/54).
If we revisit this design and deprecate the old metric, we should reduce
the need for a significant deprecation period by making the change in
v0 also and asking this project to move to the new metric.
@@ -442,23 +439,20 @@ suddenly (from their perspective) when it is removed, even if there is
an equivalent metric for them to use.

As an example, see how `vllm:avg_prompt_throughput_toks_per_s` was
[deprecated](gh-pr:2764) (with a comment in the code),
[removed](gh-pr:12383), and then [noticed by a user](gh-issue:13218).

In general:

1. We should be cautious about deprecating metrics, especially since
   it can be hard to predict the user impact.
2. We should include a prominent deprecation notice in the help string
   that is included in the `/metrics` output.
3. We should list deprecated metrics in user-facing documentation and
   release notes.
4. We should consider hiding deprecated metrics behind a CLI argument
   in order to give administrators
   [an escape hatch](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/#show-hidden-metrics)
   for some time before deleting them.

See the [deprecation policy](../../contributing/deprecation_policy.md) for
@@ -474,7 +468,7 @@ removed.
The `vllm:time_in_queue_requests` Histogram metric was added by
<gh-pr:9659> and its calculation is:

```python
self.metrics.first_scheduled_time = now
self.metrics.time_in_queue = now - self.metrics.arrival_time
```
@@ -482,7 +476,7 @@ The `vllm:time_in_queue_requests` Histogram metric was added by
Two weeks later, <gh-pr:4464> added `vllm:request_queue_time_seconds` leaving
us with:

```python
if seq_group.is_finished():
    if (seq_group.metrics.first_scheduled_time is not None and
            seq_group.metrics.first_token_time is not None):
@@ -517,8 +511,7 @@ cache to complete other requests), we swap kv cache blocks out to CPU
memory. This is also known as "KV cache offloading" and is configured
with `--swap-space` and `--preemption-mode`.

In v0, [vLLM has long supported beam search](gh-issue:6226). The
SequenceGroup encapsulated the idea of N Sequences which
all shared the same prompt kv blocks. This enabled KV cache block
sharing between requests, and copy-on-write to do branching. CPU
@@ -530,9 +523,8 @@ option than CPU swapping since blocks can be evicted slowly on demand
and the part of the prompt that was evicted can be recomputed.

SequenceGroup was removed in V1, although a replacement will be
required for "parallel sampling" (`n>1`).
[Beam search was moved out of the core (in V0)](gh-issue:8306). There was a
lot of complex code for a very uncommon feature.

In V1, with prefix caching being better (zero overhead) and therefore
@@ -547,7 +539,7 @@ Some v0 metrics are only relevant in the context of "parallel
sampling". This is where the `n` parameter in a request is used to sampling". This is where the `n` parameter in a request is used to
request multiple completions from the same prompt. request multiple completions from the same prompt.
As part of adding parallel sampling support in <gh-pr:10980> we should As part of adding parallel sampling support in <gh-pr:10980>, we should
also add these metrics. also add these metrics.
- `vllm:request_params_n` (Histogram) - `vllm:request_params_n` (Histogram)
@@ -576,7 +568,8 @@ There is a PR under review (<gh-pr:12193>) to add "prompt lookup (ngram)"
speculative decoding to v1. Other techniques will follow. We should
revisit the v0 metrics in this context.

!!! note
    We should probably expose acceptance rate as separate accepted
    and draft counters, like we do for prefix caching hit rate. Efficiency
    likely also needs similar treatment.
@@ -585,17 +578,13 @@ likely also needs similar treatment.
A common use case for our metrics is to support automated scaling of
vLLM instances.

For related discussion from the
[Kubernetes Serving Working Group](https://github.com/kubernetes/community/tree/master/wg-serving),
see:

- [Standardizing Large Model Server Metrics in Kubernetes](https://docs.google.com/document/d/1SpSp1E6moa4HSrJnS4x3NpLuj88sMXr2tbofKlzTZpk)
- [Benchmarking LLM Workloads for Performance Evaluation and Autoscaling in Kubernetes](https://docs.google.com/document/d/1k4Q4X14hW4vftElIuYGDu5KDe2LtV1XammoG-Xi3bbQ)
- [Inference Perf](https://github.com/kubernetes-sigs/wg-serving/tree/main/proposals/013-inference-perf)
- <gh-issue:5041> and <gh-pr:12726>.

This is a non-trivial topic. Consider this comment from Rob:
@@ -619,19 +608,16 @@ should judge an instance as approaching saturation:
Our approach to naming metrics probably deserves to be revisited:

1. The use of colons in metric names seems contrary to
   ["colons are reserved for user defined recording rules"](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
2. Most of our metrics follow the convention of ending with units, but
   not all do.
3. Some of our metric names end with `_total`:

    If there is a suffix of `_total` on the metric name, it will be removed. When
    exposing the time series for counter, a `_total` suffix will be added. This is
    for compatibility between OpenMetrics and the Prometheus text format, as OpenMetrics
    requires the `_total` suffix.
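A small illustration of that `_total` behavior with `prometheus_client` (hypothetical metric name):

```python
from prometheus_client import Counter, generate_latest

# Declared without the suffix...
tokens = Counter("my_example_tokens", "Tokens processed (example).")
tokens.inc(5)

# ...but the text exposition appends `_total` to the counter sample,
# i.e. the output contains "my_example_tokens_total 5.0".
print(generate_latest().decode())
```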
### Adding More Metrics
@@ -642,8 +628,7 @@ There is no shortage of ideas for new metrics:
- Proposals arising from specific use cases, like the Kubernetes
  auto-scaling topic above
- Proposals that might arise out of standardisation efforts like
  [OpenTelemetry Semantic Conventions for Gen AI](https://github.com/open-telemetry/semantic-conventions/tree/main/docs/gen-ai).

We should be cautious in our approach to adding new metrics. While
metrics are often relatively straightforward to add:
@@ -668,18 +653,14 @@ fall under the more general heading of "Observability".
v0 has support for OpenTelemetry tracing:

- Added by <gh-pr:4687>
- Configured with `--otlp-traces-endpoint` and `--collect-detailed-traces`
- [OpenTelemetry blog post](https://opentelemetry.io/blog/2024/llm-observability/)
- [User-facing docs](../../examples/online_serving/opentelemetry.md)
- [Blog post](https://medium.com/@ronen.schaffer/follow-the-trail-supercharging-vllm-with-opentelemetry-distributed-tracing-aa655229b46f)
- [IBM product docs](https://www.ibm.com/docs/en/instana-observability/current?topic=mgaa-monitoring-large-language-models-llms-vllm-public-preview)
OpenTelemetry has a
[Gen AI Working Group](https://github.com/open-telemetry/community/blob/main/projects/gen-ai.md).

Since metrics is a big enough topic on its own, we are going to tackle
the topic of tracing in v1 separately.
@@ -698,7 +679,7 @@ These metrics are only enabled when OpenTelemetry tracing is enabled
and if `--collect-detailed-traces=all/model/worker` is used. The
documentation for this option states:

> collect detailed traces for the specified modules. This involves
> use of possibly costly and or blocking operations and hence might
> have a performance impact.