[Doc] [1/N] Initial guide for merged multi-modal processor (#11925)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
This commit is contained in:
Cyrus Leung
2025-01-10 22:30:25 +08:00
committed by GitHub
parent 241ad7b301
commit 12664ddda5
19 changed files with 433 additions and 198 deletions


@@ -7,7 +7,7 @@ vLLM provides experimental support for multi-modal models through the {mod}`vllm
 Multi-modal inputs can be passed alongside text and token prompts to [supported models](#supported-mm-models)
 via the `multi_modal_data` field in {class}`vllm.inputs.PromptType`.
-Looking to add your own multi-modal model? Please follow the instructions listed [here](#enabling-multimodal-inputs).
+Looking to add your own multi-modal model? Please follow the instructions listed [here](#supports-multimodal).
 ## Module Contents
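
The documentation text in the hunk above describes passing multi-modal inputs through the `multi_modal_data` field of `vllm.inputs.PromptType`. As a minimal sketch of that prompt structure (the key names follow the vLLM docs quoted in the diff; the `None` image value is a placeholder for what would be, e.g., a `PIL.Image` in practice):

```python
# Sketch of the dict form of a multi-modal prompt, per the docs above.
# "image" here carries a placeholder; a real call would pass image data
# (e.g. a PIL.Image) and hand the dict to LLM.generate().
prompt = {
    "prompt": "USER: <image>\nWhat is shown in this picture? ASSISTANT:",
    "multi_modal_data": {"image": None},  # placeholder for real image data
}

print(sorted(prompt))
print(sorted(prompt["multi_modal_data"]))
```

This only illustrates the shape of the input; model support and valid modality keys depend on the specific model, per the supported-models list referenced in the diff.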