Merge similar examples in offline_inference into single basic example (#12737)

Harry Mellor
2025-02-20 12:53:51 +00:00
committed by GitHub
parent b69692a2d8
commit 992e5c3d34
29 changed files with 394 additions and 437 deletions


@@ -40,7 +40,7 @@ For non-CUDA platforms, please refer [here](#installation-index) for specific in
 ## Offline Batched Inference
-With vLLM installed, you can start generating texts for list of input prompts (i.e. offline batch inferencing). See the example script: <gh-file:examples/offline_inference/basic.py>
+With vLLM installed, you can start generating texts for list of input prompts (i.e. offline batch inferencing). See the example script: <gh-file:examples/offline_inference/basic/basic.py>
 The first line of this example imports the classes {class}`~vllm.LLM` and {class}`~vllm.SamplingParams`:
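The doc text above describes the offline batched inference pattern: construct `SamplingParams`, instantiate `LLM`, and call `generate` on a list of prompts. A minimal sketch of that pattern follows, assuming the standard vLLM offline API; the model name and prompt strings are illustrative, and the vLLM calls are guarded so the script degrades gracefully when vLLM (and supported hardware) is not installed.

```python
# A list of input prompts for offline batch inferencing (illustrative values).
prompts = [
    "Hello, my name is",
    "The capital of France is",
]

try:
    # Requires a working vLLM installation with a supported device.
    from vllm import LLM, SamplingParams

    # Sampling configuration; values here are illustrative defaults.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Illustrative model choice; downloads weights on first use.
    llm = LLM(model="facebook/opt-125m")

    # Generate completions for the whole batch of prompts at once.
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        print(f"{output.prompt!r} -> {output.outputs[0].text!r}")
except ImportError:
    print("vLLM is not installed; install it to run this sketch.")
```

After this change, the corresponding example script lives at `examples/offline_inference/basic/basic.py` rather than `examples/offline_inference/basic.py`.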