diff --git a/docs/README.md b/docs/README.md
index 0c279c19f..4b480c463 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -62,7 +62,7 @@ vLLM is flexible and easy to use with:
 
 For more information, check out the following:
 
-- [vLLM announcing blog post](https://vllm.ai) (intro to PagedAttention)
+- [vLLM announcing blog post](https://blog.vllm.ai/2023/06/20/vllm.html) (intro to PagedAttention)
 - [vLLM paper](https://arxiv.org/abs/2309.06180) (SOSP 2023)
 - [How continuous batching enables 23x throughput in LLM inference while reducing p50 latency](https://www.anyscale.com/blog/continuous-batching-llm-inference) by Cade Daniel et al.
 - [vLLM Meetups](community/meetups.md)