rename PromptInputs and inputs with backward compatibility (#8760)

This commit is contained in:
Cyrus Leung
2024-09-26 00:36:47 +08:00
committed by GitHub
parent 0c4d2ad5e6
commit 28e1299e60
21 changed files with 438 additions and 245 deletions

@@ -8,7 +8,7 @@ Multi-Modality
vLLM provides experimental support for multi-modal models through the :mod:`vllm.multimodal` package.
Multi-modal inputs can be passed alongside text and token prompts to :ref:`supported models <supported_vlms>`
-via the ``multi_modal_data`` field in :class:`vllm.inputs.PromptInputs`.
+via the ``multi_modal_data`` field in :class:`vllm.inputs.PromptType`.
Currently, vLLM only has built-in support for image data. You can extend vLLM to process additional modalities
by following :ref:`this guide <adding_multimodal_plugin>`.

@@ -1,7 +1,7 @@
LLM Inputs
==========
-.. autodata:: vllm.inputs.PromptInputs
+.. autodata:: vllm.inputs.PromptType
.. autoclass:: vllm.inputs.TextPrompt
:show-inheritance:

@@ -27,7 +27,7 @@ The :class:`~vllm.LLM` class can be instantiated in much the same way as languag
We have removed all vision language related CLI args in the ``0.5.1`` release. **This is a breaking change**, so please update your code to follow
the above snippet. Specifically, ``image_feature_size`` can no longer be specified as we now calculate that internally for each model.
-To pass an image to the model, note the following in :class:`vllm.inputs.PromptInputs`:
+To pass an image to the model, note the following in :class:`vllm.inputs.PromptType`:
* ``prompt``: The prompt should follow the format that is documented on HuggingFace.
* ``multi_modal_data``: This is a dictionary that follows the schema defined in :class:`vllm.multimodal.MultiModalDataDict`.
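The two fields above can be sketched as a prompt dictionary. This is a minimal illustration of the shape that :class:`vllm.inputs.PromptType` accepts after the rename; the prompt template and model name are illustrative, and ``image`` would normally be a ``PIL.Image.Image`` loaded by the caller:

```python
# Hedged sketch of a multi-modal prompt dict for vLLM. The keys match the
# two fields described above: ``prompt`` (model-specific text template) and
# ``multi_modal_data`` (schema of vllm.multimodal.MultiModalDataDict).
image = object()  # placeholder; in real use, a PIL.Image.Image

prompt = {
    "prompt": "USER: <image>\nWhat is in this picture? ASSISTANT:",
    "multi_modal_data": {"image": image},
}

# With a vision-language model loaded, generation would look like:
#   from vllm import LLM
#   llm = LLM(model="llava-hf/llava-1.5-7b-hf")  # example model name
#   outputs = llm.generate(prompt)
```

The same dictionary shape is accepted by both the old ``PromptInputs`` alias and the new ``PromptType`` name, which is what makes this rename backward compatible for callers.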