[V1][VLM][Pixtral-HF] Support Pixtral-HF on V1 (#14275)

Signed-off-by: Linkun Chen <github@lkchen.net>
Authored by lkchen on 2025-03-06 00:58:41 -08:00, committed by GitHub
parent 1769928079
commit 5d802522a7
4 changed files with 175 additions and 16 deletions


@@ -866,7 +866,7 @@ See [this page](#generative-models) for more information on how to use generativ
 - * `PixtralForConditionalGeneration`
   * Pixtral
   * T + I<sup>+</sup>
-  * `mistralai/Pixtral-12B-2409`, `mistral-community/pixtral-12b` (see note), etc.
+  * `mistralai/Pixtral-12B-2409`, `mistral-community/pixtral-12b`, etc.
   *
   * ✅︎
   * ✅︎
@@ -930,10 +930,6 @@ For more details, please see: <gh-pr:4087#issuecomment-2250397630>
 Currently the PaliGemma model series is implemented without PrefixLM attention mask. This model series may be deprecated in a future release.
 :::
-:::{note}
-`mistral-community/pixtral-12b` does not support V1 yet.
-:::
 :::{note}
 To use Qwen2.5-VL series models, you have to install Hugging Face Transformers library from source via `pip install git+https://github.com/huggingface/transformers`.
 :::
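With the note removed by this commit, the Hugging Face-format checkpoint `mistral-community/pixtral-12b` can run on the V1 engine. A minimal sketch of serving it, assuming vLLM is installed with GPU support and that V1 is still opt-in via the `VLLM_USE_V1` environment variable in this release:

```shell
# Opt in to the V1 engine (assumption: V1 is not yet the default here)
export VLLM_USE_V1=1

# Launch the OpenAI-compatible server with the Pixtral-HF checkpoint
vllm serve mistral-community/pixtral-12b
```

The same checkpoint can also be passed to the offline `LLM` entrypoint; either way, the model is loaded through `PixtralForConditionalGeneration` as listed in the supported-models table above.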