biondizzle/vllm
Files: vllm/model_executor at commit 75f01b9d3c3a40e52e2fa4a2c9efc92cf45a88fc
Latest commit e0c910bb89 by Thomas Parnell: [Hybrid] [Kernel] Fix chunk scan kernel when BLOCK_SIZE_DSTATE > 128 (#28295)
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
2025-11-14 22:55:42 +00:00
| Name | Last commit | Date |
| --- | --- | --- |
| layers | [Hybrid] [Kernel] Fix chunk scan kernel when BLOCK_SIZE_DSTATE > 128 (#28295) | 2025-11-14 22:55:42 +00:00 |
| model_loader | Skip models that cannot currently init on Transformers v5 (#28471) | 2025-11-12 23:43:57 +00:00 |
| models | [Bugfix] resolve Qwen3-VL GPTQModel quantized model loading failure (#28663) | 2025-11-14 18:44:27 +00:00 |
| warmup | [Core] Encoder separation for Encode-Prefill-Decode Disaggregation (#25233) | 2025-11-11 18:58:33 -08:00 |
| __init__.py | Convert formatting to use ruff instead of yapf + isort (#26247) | 2025-10-05 07:06:22 -07:00 |
| custom_op.py | [FrontEnd] UNREVERT CompilationConfig overhaul (#20283): deprecate use_inductor in favor of backend, simplify custom_ops (#26502) | 2025-10-13 22:47:16 +00:00 |
| parameter.py | [Docs] Replace rst style double-backtick with md single-backtick (#27091) | 2025-10-17 02:47:34 -07:00 |
| utils.py | [Chore] Clean up pytorch helper functions in vllm.utils (#26908) | 2025-10-18 09:48:22 -07:00 |