[Doc] Fix: Correct vLLM announcing blog post link in docs (#31868)

Signed-off-by: enfinity <festusowumi@gmail.com>
Author: Festus Ayobami Owumi
Date: 2026-01-07 18:06:42 +00:00
Committed by: GitHub
Parent: bf184a6621
Commit: 05f47bd8d2

@@ -62,7 +62,7 @@ vLLM is flexible and easy to use with:
 For more information, check out the following:
-- [vLLM announcing blog post](https://vllm.ai) (intro to PagedAttention)
+- [vLLM announcing blog post](https://blog.vllm.ai/2023/06/20/vllm.html) (intro to PagedAttention)
 - [vLLM paper](https://arxiv.org/abs/2309.06180) (SOSP 2023)
 - [How continuous batching enables 23x throughput in LLM inference while reducing p50 latency](https://www.anyscale.com/blog/continuous-batching-llm-inference) by Cade Daniel et al.
 - [vLLM Meetups](community/meetups.md)