biondizzle/vllm — vllm/tests/models/multimodal
Commit: 4429d934de3c5cc327b0d7aec8e473aeba38db90
Latest commit: 87b4d1557d by Shanshan Shen — [CustomOp][MM] Extract MMEncoderAttention as CustomOp and replace the backend of QwenVisionAttention with it. (#30125) — 2025-12-15 11:13:32 +08:00
Signed-off-by: shen-shanshan <467638484@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>
Entry            | Latest commit                                                                                                  | Date
generation       | [CustomOp][MM] Extract MMEncoderAttention as CustomOp and replace the backend of QwenVisionAttention with it. (#30125) | 2025-12-15 11:13:32 +08:00
pooling          | Support tokenization_kwargs override (#29794)                                                                  | 2025-12-06 09:12:53 +00:00
processing       | Add AudioFlamingo3 model support (#30539)                                                                      | 2025-12-14 02:14:55 -08:00
__init__.py      | [CI/Build] Move model-specific multi-modal processing tests (#11934)                                           | 2025-01-11 13:50:05 +08:00
test_mapping.py  | Revert "[Renderer] Separate out RendererConfig from ModelConfig (#30145)" (#30199)                              | 2025-12-07 00:00:22 -08:00