Merge similar examples in offline_inference into single basic example (#12737)

Harry Mellor
2025-02-20 12:53:51 +00:00
committed by GitHub
parent b69692a2d8
commit 992e5c3d34
29 changed files with 394 additions and 437 deletions

@@ -46,7 +46,7 @@ for output in outputs:
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 ```
 
-A code example can be found here: <gh-file:examples/offline_inference/basic.py>
+A code example can be found here: <gh-file:examples/offline_inference/basic/basic.py>
 
 ### `LLM.beam_search`
 
@@ -103,7 +103,7 @@ for output in outputs:
     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
 ```
 
-A code example can be found here: <gh-file:examples/offline_inference/chat.py>
+A code example can be found here: <gh-file:examples/offline_inference/basic/chat.py>
 
 If the model doesn't have a chat template or you want to specify another one,
 you can explicitly pass a chat template:
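
For reference, a minimal sketch of what passing an explicit chat template to the offline `LLM.chat` API looks like. The model name and the template contents below are illustrative assumptions, not part of this diff:

```python
from vllm import LLM, SamplingParams

# Illustrative model choice: OPT ships without a chat template,
# so an explicit template must be supplied to LLM.chat.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# A minimal Jinja2 chat template (assumed here purely for illustration).
custom_template = (
    "{% for message in messages %}"
    "{{ message['role'] }}: {{ message['content'] }}\n"
    "{% endfor %}"
    "assistant:"
)

conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# chat_template overrides whatever template the tokenizer provides (if any).
outputs = llm.chat(conversation, sampling_params, chat_template=custom_template)
for output in outputs:
    print(output.outputs[0].text)
```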