biondizzle/vllm
vllm/tests/models/multimodal @ 8d9babd4dea934fdd47b5a20a73ef0e04ff0e22e
Latest commit: cef65f0715 "[ROCm][CI] Removed hard-coded attn backend requirement for Qwen VL (#34753)" by Andreas Karatzas (Signed-off-by: Andreas Karatzas <akaratza@amd.com>), 2026-02-18 03:59:53 +00:00
generation       [ROCm][CI] Removed hard-coded attn backend requirement for Qwen VL (#34753)                    2026-02-18 03:59:53 +00:00
pooling          [Misc] Update tests and examples for Prithvi/Terratorch models (#34416)                        2026-02-13 23:03:51 -08:00
processing       [Renderer] Move InputPreprocessor into Renderer (2/2) (#34560)                                 2026-02-17 05:29:01 -08:00
__init__.py      [CI/Build] Move model-specific multi-modal processing tests (#11934)                           2025-01-11 13:50:05 +08:00
conftest.py      [ROCm][CI] Fix HuggingFace flash_attention_2 accuracy issue in Isaac vision encoder (#32233)   2026-01-12 22:33:59 -08:00
test_mapping.py  [Bugfix] Fix models and tests for transformers v5 (#33977)                                     2026-02-06 21:47:41 +08:00