[Docs] Convert rST to MyST (Markdown) (#11145)

Signed-off-by: Rafael Vasquez <rafvasq21@gmail.com>
Author: Rafael Vasquez, 2024-12-23 17:35:38 -05:00 (committed by GitHub)
parent 94d545a1a1
commit 32aa2059ad
167 changed files with 7863 additions and 8131 deletions

(multi-modality)=
# Multi-Modality
```{eval-rst}
.. currentmodule:: vllm.multimodal
```
vLLM provides experimental support for multi-modal models through the {mod}`vllm.multimodal` package.
Multi-modal inputs can be passed alongside text and token prompts to [supported models](#supported-mm-models)
via the `multi_modal_data` field in {class}`vllm.inputs.PromptType`.

Currently, vLLM only has built-in support for image data. You can extend vLLM to process additional modalities
by following [this guide](#adding-multimodal-plugin).

Looking to add your own multi-modal model? Please follow the instructions listed [here](#enabling-multimodal-inputs).
## Guides
```{toctree}
:maxdepth: 1
adding_multimodal_plugin
```
## Module Contents
```{eval-rst}
.. automodule:: vllm.multimodal
```
### Registry
```{eval-rst}
.. autodata:: vllm.multimodal.MULTIMODAL_REGISTRY
```
```{eval-rst}
.. autoclass:: vllm.multimodal.MultiModalRegistry
:members:
:show-inheritance:
```
### Base Classes
```{eval-rst}
.. autodata:: vllm.multimodal.NestedTensors
```
```{eval-rst}
.. autodata:: vllm.multimodal.BatchedTensorInputs
```
```{eval-rst}
.. autoclass:: vllm.multimodal.MultiModalDataBuiltins
:members:
:show-inheritance:
```
```{eval-rst}
.. autodata:: vllm.multimodal.MultiModalDataDict
```
```{eval-rst}
.. autoclass:: vllm.multimodal.MultiModalKwargs
:members:
:show-inheritance:
```
```{eval-rst}
.. autoclass:: vllm.multimodal.MultiModalPlugin
:members:
:show-inheritance:
```
### Image Classes
```{eval-rst}
.. automodule:: vllm.multimodal.image
:members:
:show-inheritance:
```