[Doc] Move examples into categories (#11840)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Harry Mellor
2025-01-08 13:09:53 +00:00
committed by GitHub
parent 2a0596bc48
commit aba8d6ee00
116 changed files with 153 additions and 124 deletions


@@ -60,7 +60,7 @@ for o in outputs:
print(generated_text)
```
-Full example: <gh-file:examples/offline_inference_vision_language.py>
+Full example: <gh-file:examples/offline_inference/offline_inference_vision_language.py>
To substitute multiple images inside the same text prompt, you can pass in a list of images instead:
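The hunk above only moves the example path; as a hedged sketch of the list-of-images input it documents (the prompt placeholder format, image sizes, and commented-out model setup are illustrative, and actual generation requires a vLLM install and a multimodal checkpoint):

```python
import numpy as np

# Two placeholder images; in practice these would be PIL images or
# arrays loaded from disk.
image_1 = np.zeros((64, 64, 3), dtype=np.uint8)
image_2 = np.ones((64, 64, 3), dtype=np.uint8)

# Multiple images go in as a list under the "image" key of
# multi_modal_data, with one placeholder per image in the prompt.
request = {
    "prompt": "<image><image>\nDescribe the two images.",
    "multi_modal_data": {"image": [image_1, image_2]},
}

# llm = LLM(model=..., limit_mm_per_prompt={"image": 2})
# outputs = llm.generate(request)
```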
@@ -91,7 +91,7 @@ for o in outputs:
print(generated_text)
```
-Full example: <gh-file:examples/offline_inference_vision_language_multi_image.py>
+Full example: <gh-file:examples/offline_inference/offline_inference_vision_language_multi_image.py>
Multi-image input can be extended to perform video captioning. We show this with [Qwen2-VL](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) as it supports videos:
@@ -125,13 +125,13 @@ for o in outputs:
You can pass a list of NumPy arrays directly to the `'video'` field of the multi-modal dictionary
instead of using multi-image input.
-Full example: <gh-file:examples/offline_inference_vision_language.py>
+Full example: <gh-file:examples/offline_inference/offline_inference_vision_language.py>
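The surrounding context notes that NumPy arrays can be passed directly to the `'video'` field; a minimal sketch of that request structure (the frame count, frame shape, and prompt placeholder are assumptions, and real captioning requires a running model):

```python
import numpy as np

# Eight dummy frames standing in for decoded video; shape (H, W, C).
frames = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(8)]

# Per the docs above, the arrays go straight into the "video" field
# of the multi-modal dictionary.
request = {
    "prompt": "<|video|>\nSummarize what happens in the video.",
    "multi_modal_data": {"video": frames},
}
```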
### Audio
You can pass a tuple `(array, sampling_rate)` to the `'audio'` field of the multi-modal dictionary.
-Full example: <gh-file:examples/offline_inference_audio_language.py>
+Full example: <gh-file:examples/offline_inference/offline_inference_audio_language.py>
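The audio section above specifies a `(array, sampling_rate)` tuple; a hedged sketch of that input shape (the sampling rate, silent waveform, and prompt placeholder are illustrative, and actual inference needs an audio-capable model):

```python
import numpy as np

sampling_rate = 16_000
# One second of silence as a stand-in for a real waveform.
audio = np.zeros(sampling_rate, dtype=np.float32)

# The tuple goes into the "audio" field of the multi-modal dictionary.
request = {
    "prompt": "<|audio|>\nTranscribe the audio.",
    "multi_modal_data": {"audio": (audio, sampling_rate)},
}
```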
### Embedding
@@ -271,7 +271,7 @@ chat_response = client.chat.completions.create(
print("Chat completion output:", chat_response.choices[0].message.content)
```
-Full example: <gh-file:examples/openai_chat_completion_client_for_multimodal.py>
+Full example: <gh-file:examples/online_serving/openai_chat_completion_client_for_multimodal.py>
```{tip}
Loading from local file paths is also supported on vLLM: You can specify the allowed local media path via `--allowed-local-media-path` when launching the API server/engine,
@@ -342,7 +342,7 @@ result = chat_completion_from_url.choices[0].message.content
print("Chat completion output from image url:", result)
```
-Full example: <gh-file:examples/openai_chat_completion_client_for_multimodal.py>
+Full example: <gh-file:examples/online_serving/openai_chat_completion_client_for_multimodal.py>
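The client example this hunk relocates builds an OpenAI-style multimodal chat message; a sketch of that payload shape (the URL and question are placeholders, and sending it requires a running vLLM server plus the `openai` package):

```python
# OpenAI chat-completions content entries mix "text" and "image_url"
# parts; vLLM's server accepts the same schema.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }
]

# client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
# chat_response = client.chat.completions.create(model=..., messages=messages)
# print("Chat completion output:", chat_response.choices[0].message.content)
```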
````{note}
By default, the timeout for fetching videos through HTTP URL is `30` seconds.
@@ -445,7 +445,7 @@ result = chat_completion_from_url.choices[0].message.content
print("Chat completion output from audio url:", result)
```
-Full example: <gh-file:examples/openai_chat_completion_client_for_multimodal.py>
+Full example: <gh-file:examples/online_serving/openai_chat_completion_client_for_multimodal.py>
````{note}
By default, the timeout for fetching audios through HTTP URL is `10` seconds.
@@ -529,4 +529,4 @@ Also important, `MrLight/dse-qwen2-2b-mrl-v1` requires a placeholder image of th
example below for details.
```
-Full example: <gh-file:examples/openai_chat_embedding_client_for_multimodal.py>
+Full example: <gh-file:examples/online_serving/openai_chat_embedding_client_for_multimodal.py>