biondizzle/vllm
Path: vllm/vllm/model_executor (at commit 0dd5dee9b9bc88453f5f3eacfde751e6b9ba4871)
History
xuebwang-amd
0dd5dee9b9
[Bugfix][Kernel] fix bias adding in triton kernel implemented fused moe (
#31676
)
...
Signed-off-by: xuebwang-amd <
xuebwang@amd.com
>
2026-01-07 07:36:13 +00:00
layers          [Bugfix][Kernel] fix bias adding in triton kernel implemented fused moe (#31676)             2026-01-07 07:36:13 +00:00
model_loader    [Docs] Improve malformed exception caused by backslash line continuations (#31694)            2026-01-05 17:51:54 -08:00
models          [BugFix] LoRA: Support loading base_layer of experts (#31104)                                 2026-01-07 14:49:39 +08:00
warmup          [UX] Reduce DeepGEMM warmup log output to single progress bar (#30903)                        2025-12-17 20:21:51 -08:00
__init__.py     [Platform] Deprecate seed_everything (#31659)                                                 2026-01-04 18:34:04 -08:00
custom_op.py    [Bugfix][CPU] Fix RotaryEmbedding fallback causing gibberish with --enforce-eager (#31643)    2026-01-06 01:25:38 +08:00
parameter.py    [Docs] Replace rst style double-backtick with md single-backtick (#27091)                     2025-10-17 02:47:34 -07:00
utils.py        [Platform] Deprecate seed_everything (#31659)                                                 2026-01-04 18:34:04 -08:00