vllm / tests / model_executor / model_loader (at commit f0d525171557e3fe74e8e6df52257f9d66831d3f)

History
Latest commit 6c64c41b4a by Micah Williamson: [ROCm][CI] Force max_num_seqs=1 on ROCm In test_sharded_state_loader to reduce flakiness (#33277)
Signed-off-by: Micah Williamson <micah.williamson@amd.com>
2026-01-31 12:28:29 +08:00
..
fastsafetensors_loader        [BugFix] [FEAT] Enable fastsafetensors for ROCm platform (#28225)                                   2025-11-20 16:34:11 +00:00
runai_streamer_loader         [Chore] Try remove init_cached_hf_modules (#31786)                                                  2026-01-07 12:34:04 +08:00
tensorizer_loader             [Chore] Try remove init_cached_hf_modules (#31786)                                                  2026-01-07 12:34:04 +08:00
__init__.py                   [Core] Support model loader plugins (#21067)                                                        2025-07-24 01:49:44 -07:00
test_registry.py              Convert formatting to use ruff instead of yapf + isort (#26247)                                     2025-10-05 07:06:22 -07:00
test_reload.py                [QeRL] Layerwise Reloading (#32133)                                                                 2026-01-30 08:50:05 -07:00
test_sharded_state_loader.py  [ROCm][CI] Force max_num_seqs=1 on ROCm In test_sharded_state_loader to reduce flakiness (#33277)  2026-01-31 12:28:29 +08:00