[Doc] Rename offline inference examples (#11927)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Harry Mellor
2025-01-10 15:50:29 +00:00
committed by GitHub
parent 20410b2fda
commit 482cdc494e
46 changed files with 46 additions and 46 deletions


@@ -46,7 +46,7 @@ for output in outputs:
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
-A code example can be found here: <gh-file:examples/offline_inference/offline_inference.py>
+A code example can be found here: <gh-file:examples/offline_inference/basic.py>
### `LLM.beam_search`
@@ -103,7 +103,7 @@ for output in outputs:
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
-A code example can be found here: <gh-file:examples/offline_inference/offline_inference_chat.py>
+A code example can be found here: <gh-file:examples/offline_inference/chat.py>
If the model doesn't have a chat template or you want to specify another one,
you can explicitly pass a chat template: