[Doc]: update paths for Offline/Online/Others example sections (#33494)
Signed-off-by: Sawyer Bowerman <sbowerma@redhat.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@@ -2,6 +2,6 @@
 vLLM's examples are split into three categories:

-- If you are using vLLM from within Python code, see the *Offline Inference* section.
-- If you are using vLLM from an HTTP application or client, see the *Online Serving* section.
-- For examples of using some of vLLM's advanced features (e.g. LMCache or Tensorizer) which are not specific to either of the above use cases, see the *Others* section.
+- If you are using vLLM from within Python code, see the [Offline Inference](../../examples/offline_inference) section.
+- If you are using vLLM from an HTTP application or client, see the [Online Serving](../../examples/online_serving) section.
+- For examples of using some of vLLM's advanced features (e.g. LMCache or Tensorizer) which are not specific to either of the above use cases, see the [Others](../../examples/others) section.
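As a sanity check on the new links, the `../../` prefixes can be resolved mechanically. The sketch below is illustrative only: the diff does not show which docs page was edited, so the `page_dir` value here is a hypothetical location chosen to show how a two-level-up relative link resolves, not the actual path in the vLLM repo.

```python
import posixpath

# Hypothetical location of the edited docs page (NOT stated in the diff);
# chosen only to illustrate how "../../" resolves against a page directory.
page_dir = "docs/getting_started/examples"

# The three relative link targets introduced by this commit.
links = (
    "../../examples/offline_inference",
    "../../examples/online_serving",
    "../../examples/others",
)

for link in links:
    # Join the link onto the page directory, then collapse the ".." segments.
    resolved = posixpath.normpath(posixpath.join(page_dir, link))
    print(resolved)
```

Each link climbs two directories out of the (assumed) page location before descending into `examples/`, so all three targets land under a single `examples/` tree.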