biondizzle/vllm
vllm/tests/models/multimodal at commit fa6a6be51978bd4b49ba0da17039e60f96dc5b13
Latest commit: e8249378e4 by Yueqian Lin, 2026-02-27 06:48:25 -08:00
[Bugfix] Fix check_interleaved_audio_video false positive for batched non-interleaved requests (#35487)
Signed-off-by: linyueqian <linyueqian@outlook.com>
Co-authored-by: Roger Wang <hey@rogerw.io>
| Name | Last commit | Date |
| --- | --- | --- |
| generation/ | [Llama4,CI] Bring back Llama-4 bug fixes, and also fix Maverick tests (#35033) | 2026-02-23 09:04:34 -05:00 |
| pooling/ | [Model] Add nvidia/llama-nemotron-embed-vl-1b-v2 multimodal embedding model (#35297) | 2026-02-26 14:17:17 +00:00 |
| processing/ | [Bugfix] Fix check_interleaved_audio_video false positive for batched non-interleaved requests (#35487) | 2026-02-27 06:48:25 -08:00 |
| __init__.py | [CI/Build] Move model-specific multi-modal processing tests (#11934) | 2025-01-11 13:50:05 +08:00 |
| conftest.py | [ROCm][CI] Disable skinny GEMMs in multimodal tests to fix non-deterministic results (#35049) | 2026-02-25 16:48:37 +00:00 |
| test_mapping.py | [Bugfix] Fix models and tests for transformers v5 (#33977) | 2026-02-06 21:47:41 +08:00 |