Repository: biondizzle/vllm
Path: vllm/tests/models/multimodal (at commit dd20ee4e3e873364bd79983dcbb30d2189c96507)
Latest commit: 5a4a179591 [ROCm][CI] Fix granite_speech test for gfx90a by selecting compatible attention backend (#37611)
Author: Andreas Karatzas
Signed-off-by: Andreas Karatzas <akaratza@amd.com>
Date: 2026-03-20 17:07:26 +08:00
generation       [ROCm][CI] Fix granite_speech test for gfx90a by selecting compatible attention backend (#37611)   2026-03-20 17:07:26 +08:00
pooling          [Model] Add ColQwen3.5 4.5B support (#36887)   2026-03-17 21:17:02 +00:00
processing       [Model] Remove unused handle_oov_mm_token (#37321)   2026-03-17 19:44:52 +00:00
__init__.py      [CI/Build] Move model-specific multi-modal processing tests (#11934)   2025-01-11 13:50:05 +08:00
conftest.py      [ROCm][CI] Disable skinny GEMMs in multimodal tests to fix non-deterministic results (#35049)   2026-02-25 16:48:37 +00:00
test_mapping.py  Use Transformers v5 WeightRenaming for Transformers modeling backend (#31545)   2026-03-13 20:49:08 +00:00