Replace "online inference" with "online serving" (#11923)
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
@@ -5,7 +5,7 @@
 This guide will help you quickly get started with vLLM to perform:
 
 - [Offline batched inference](#quickstart-offline)
-- [Online inference using OpenAI-compatible server](#quickstart-online)
+- [Online serving using OpenAI-compatible server](#quickstart-online)
 
 ## Prerequisites
 
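For context on the rename, the two quickstart entries above correspond to vLLM's two usage modes: running generation in-process ("offline batched inference") versus querying a running server ("online serving"). A minimal sketch of each, assuming vLLM's Python API (`LLM`, `SamplingParams`) and its OpenAI-compatible server started with `vllm serve`; the model name here is a placeholder:

```python
from vllm import LLM, SamplingParams

# Offline batched inference: load the model in-process and
# generate completions for a batch of prompts in one call.
llm = LLM(model="facebook/opt-125m")  # placeholder model
params = SamplingParams(temperature=0.8, top_p=0.95)
for out in llm.generate(["Hello, my name is", "The future of AI is"], params):
    print(out.outputs[0].text)

# Online serving: start the OpenAI-compatible server separately, e.g.
#   vllm serve facebook/opt-125m
# then query it with any OpenAI client pointed at the local endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="facebook/opt-125m",
    prompt="San Francisco is a",
)
print(completion.choices[0].text)
```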