biondizzle/vllm
Path: vllm/tests/models/multimodal
Commit: c8b678e53e37b24aa457502e2e47a650b27fd0ec

Latest commit: [Model] Add support for nvidia/llama-nemotron-rerank-vl-1b-v2 (#35735)
Signed-off-by: Jakub Zakrzewski <jzakrzewski@nvidia.com>
Date: 2026-03-03 08:32:14 +08:00
generation/       [Llama4,CI] Bring back Llama-4 bug fixes, and also fix Maverick tests (#35033)                  2026-02-23 09:04:34 -05:00
pooling/          [Model] Add support for nvidia/llama-nemotron-rerank-vl-1b-v2 (#35735)                         2026-03-03 08:32:14 +08:00
processing/       [Bugfix] Fix MM processor test for Qwen3.5 (#35797)                                            2026-03-02 23:05:08 +00:00
__init__.py       [CI/Build] Move model-specific multi-modal processing tests (#11934)                           2025-01-11 13:50:05 +08:00
conftest.py       [ROCm][CI] Disable skinny GEMMs in multimodal tests to fix non-deterministic results (#35049)  2026-02-25 16:48:37 +00:00
test_mapping.py   [Bugfix] Fix models and tests for transformers v5 (#33977)                                     2026-02-06 21:47:41 +08:00