biondizzle / vllm
vllm / tests / models / multimodal @ 9c3fe9936b929b5503d780bd4e8e3cd524de1c4e
Latest commit 111d869069 by Jakub Zakrzewski: [Model] Add nvidia/llama-nemotron-embed-vl-1b-v2 multimodal embedding model (#35297)
Signed-off-by: Jakub Zakrzewski <jzakrzewski@nvidia.com>
2026-02-26 14:17:17 +00:00
generation       [Llama4,CI] Bring back Llama-4 bug fixes, and also fix Maverick tests (#35033)                 2026-02-23 09:04:34 -05:00
pooling          [Model] Add nvidia/llama-nemotron-embed-vl-1b-v2 multimodal embedding model (#35297)           2026-02-26 14:17:17 +00:00
processing       [Misc] Standardize handling of mm_processor_kwargs.size (#35284)                               2026-02-26 13:05:46 +00:00
__init__.py      [CI/Build] Move model-specific multi-modal processing tests (#11934)                           2025-01-11 13:50:05 +08:00
conftest.py      [ROCm][CI] Disable skinny GEMMs in multimodal tests to fix non-deterministic results (#35049)  2026-02-25 16:48:37 +00:00
test_mapping.py  [Bugfix] Fix models and tests for transformers v5 (#33977)                                     2026-02-06 21:47:41 +08:00