[Doc] Rename offline inference examples (#11927)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Harry Mellor
2025-01-10 15:50:29 +00:00
committed by GitHub
parent 20410b2fda
commit 482cdc494e
46 changed files with 46 additions and 46 deletions


@@ -60,7 +60,7 @@ for o in outputs:
print(generated_text)
```
-Full example: <gh-file:examples/offline_inference/offline_inference_vision_language.py>
+Full example: <gh-file:examples/offline_inference/vision_language.py>
To substitute multiple images inside the same text prompt, you can pass in a list of images instead:
@@ -91,7 +91,7 @@ for o in outputs:
print(generated_text)
```
-Full example: <gh-file:examples/offline_inference/offline_inference_vision_language_multi_image.py>
+Full example: <gh-file:examples/offline_inference/vision_language_multi_image.py>
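As a minimal sketch of the list form described above (the prompt template and the image placeholders here are illustrative assumptions, not the exact API of any particular model; in a real run the list entries would be loaded PIL images):

```python
# Sketch of a multi-image request. The placeholders below stand in for
# loaded images, e.g. PIL.Image.open("cherry_blossom.jpg").
image_1 = object()  # placeholder for the first loaded image
image_2 = object()  # placeholder for the second loaded image

inputs = {
    # The prompt's image tokens are model-specific; "<image>" is illustrative.
    "prompt": "<image><image> What do these two images have in common?",
    "multi_modal_data": {
        # Passing a list instead of a single image substitutes multiple
        # images into the same text prompt.
        "image": [image_1, image_2],
    },
}
```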
Multi-image input can be extended to perform video captioning. We show this with [Qwen2-VL](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) as it supports videos:
@@ -125,13 +125,13 @@ for o in outputs:
You can pass a list of NumPy arrays directly to the `'video'` field of the multi-modal dictionary
instead of using multi-image input.
-Full example: <gh-file:examples/offline_inference/offline_inference_vision_language.py>
+Full example: <gh-file:examples/offline_inference/vision_language.py>
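A sketch of the `'video'` field described above, using a NumPy array of frames (the frame count, resolution, and prompt token are illustrative assumptions):

```python
import numpy as np

# Sketch: a video passed directly as a NumPy array of frames.
# Shape is (num_frames, height, width, channels); values are illustrative.
video = np.zeros((16, 224, 224, 3), dtype=np.uint8)

inputs = {
    # The video token in the prompt is model-specific; "<video>" is illustrative.
    "prompt": "<video> Describe what happens in this clip.",
    "multi_modal_data": {"video": video},
}
```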
### Audio
You can pass a tuple `(array, sampling_rate)` to the `'audio'` field of the multi-modal dictionary.
-Full example: <gh-file:examples/offline_inference/offline_inference_audio_language.py>
+Full example: <gh-file:examples/offline_inference/audio_language.py>
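A sketch of the `(array, sampling_rate)` tuple described above (the sampling rate, clip length, and prompt token are illustrative assumptions):

```python
import numpy as np

# Sketch: audio passed as a (waveform, sampling_rate) tuple.
sampling_rate = 16_000
waveform = np.zeros(sampling_rate * 2, dtype=np.float32)  # 2 s of silence

inputs = {
    # The audio token in the prompt is model-specific; "<audio>" is illustrative.
    "prompt": "<audio> Transcribe this clip.",
    "multi_modal_data": {"audio": (waveform, sampling_rate)},
}
```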
### Embedding