vllm/vllm/model_executor at commit 48d5ca4e8b8b66dd0e734821d57dfc0eefaad4d2
Latest commit: b9793e6a8c, Mamy Ratsimbazafy, 2025-12-28 08:38:33 -08:00
Add Fused MoE Triton kernels for GLM-4.5-Air, GLM-4.5v, GLM-4.6v on 2x RTX Pro 6000 (#31407)
Signed-off-by: Mamy Ratsimbazafy <mamy_github@numforge.co>
layers        Add Fused MoE Triton kernels for GLM-4.5-Air, GLM-4.5v, GLM-4.6v on 2x RTX Pro 6000 (#31407)  2025-12-28 08:38:33 -08:00
model_loader  [Chore] Remove unused noqas (#31263)                                                           2025-12-24 05:38:46 -08:00
models        [Core] Initialize LoRA support for tower and connector in multi-modal models (#26674)         2025-12-26 04:48:20 -08:00
warmup        [UX] Reduce DeepGEMM warmup log output to single progress bar (#30903)                        2025-12-17 20:21:51 -08:00
__init__.py   Convert formatting to use ruff instead of yapf + isort (#26247)                               2025-10-05 07:06:22 -07:00
custom_op.py  [CustomOp] Support object-level enable for CustomOp (#30547)                                   2025-12-15 11:02:09 +08:00
parameter.py  [Docs] Replace rst style double-backtick with md single-backtick (#27091)                      2025-10-17 02:47:34 -07:00
utils.py      [Quantization] FP8 Weight Reloading for Quantized RL Rollout (#28480)                         2025-12-09 13:54:32 -08:00