[Doc] Add MTP docs and update speculative decoding guidance (#35197)
Signed-off-by: liuxing <945764858@qq.com>
@@ -6,14 +6,33 @@ To train your own draft models for optimized speculative decoding, see [vllm-pro
## vLLM Speculation Methods
- vLLM supports a variety of methods of speculative decoding. Model-based methods such as EAGLE, draft models, and mlp provide the best latency reduction, while simpler methods such as n-gram and and suffix decoding provide modest speedups without increasing workload during peak traffic.
+ vLLM supports a variety of methods of speculative decoding. Model-based methods such as EAGLE, MTP, draft models, and MLP provide the best latency reduction, while simpler methods such as n-gram and suffix decoding provide modest speedups without increasing workload during peak traffic.
- [EAGLE](eagle.md)
- [Multi-Token Prediction (MTP)](mtp.md)
- [Draft Model](draft_model.md)
- [Multi-Layer Perceptron](mlp.md)
- [N-Gram](n_gram.md)
- [Suffix Decoding](suffix.md)
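
To see why the "simpler" methods need no extra model, here is a minimal, illustrative sketch of the prompt-lookup idea behind n-gram speculation (not vLLM's implementation; the function name and parameters are hypothetical): the drafter matches the last `n` generated tokens against earlier context and proposes the tokens that followed the match.

```python
def ngram_propose(token_ids, n=3, k=5):
    """Propose up to k draft tokens via prompt lookup: find an earlier
    occurrence of the last n tokens and copy what followed it.
    Illustrative sketch only, not vLLM's actual n-gram drafter."""
    if len(token_ids) < n:
        return []
    pattern = token_ids[-n:]
    # Scan backward through earlier context for the most recent match,
    # excluding the trivial match at the very end of the sequence.
    for start in range(len(token_ids) - n - 1, -1, -1):
        if token_ids[start:start + n] == pattern:
            continuation = token_ids[start + n:start + n + k]
            if continuation:
                return continuation
    return []
```

The proposed tokens are then verified in a single forward pass of the target model, so a wrong guess costs little and a right guess saves decode steps.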
## Method Selection at a Glance
Use this qualitative table as a starting point for method selection. Real gains
depend on your model family, traffic pattern, hardware, and sampling settings.
| Method | Low QPS (latency focused) | High QPS (throughput focused) | Notes |
| --- | --- | --- | --- |
| EAGLE | High gain | Medium to high gain | Strong general-purpose model-based method. |
| MTP | High gain | Medium to high gain | Best when the target model has native MTP support. |
| Draft model | High gain | Medium gain | Needs a separate draft model. |
| MLP speculator | Medium to high gain | Medium gain | Good when compatible MLP speculators are available. |
| N-gram | Low to medium gain | Medium gain | Lightweight and easy to enable. |
| Suffix decoding | Low to medium gain | Medium gain | No extra draft model; dynamic speculation depth. |
For reproducible measurements in your environment, use
[`examples/offline_inference/spec_decode.py`](../../../examples/offline_inference/spec_decode.py)
or the [benchmark CLI guide](../../benchmarking/cli.md).
## Lossless guarantees of Speculative Decoding
In vLLM, speculative decoding aims to enhance inference efficiency while maintaining accuracy. This section addresses the lossless guarantees of
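
The lossless property comes from the accept/resample rule of speculative sampling: draw a token from the draft distribution `q`, accept it with probability `min(1, p[x]/q[x])` against the target distribution `p`, and on rejection resample from the normalized residual `max(p - q, 0)`. The following sketch (illustrative, not vLLM code; function name is hypothetical) computes the exact marginal of that procedure by enumeration and shows it equals `p`:

```python
def spec_sample_marginal(p, q):
    """Exact output distribution of one accept/resample step.

    Draw x ~ q; accept with prob min(1, p[x]/q[x]); on rejection,
    resample from the normalized residual max(p - q, 0).
    The returned distribution equals p exactly, which is the
    lossless guarantee of speculative decoding.
    """
    vocab = len(p)
    residual = [max(p[i] - q[i], 0.0) for i in range(vocab)]
    z = sum(residual)
    out = [0.0] * vocab
    for x in range(vocab):
        accept = min(1.0, p[x] / q[x]) if q[x] > 0 else 0.0
        out[x] += q[x] * accept          # mass from accepted draws of x
    reject_mass = 1.0 - sum(out)         # probability the draw is rejected
    for i in range(vocab):
        if z > 0:
            out[i] += reject_mass * residual[i] / z  # resampled mass
    return out
```

Because `q[x] * min(1, p[x]/q[x]) = min(p[x], q[x])` and the residual restores `max(p[x] - q[x], 0)`, the two terms sum to `p[x]` for every token, regardless of how bad the draft distribution is.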