biondizzle/vllm
vllm/csrc/moe @ 3a4e10c8477c329b9e75ba55ff205a1f258cbd01
Latest commit: f28125d87b [Perf] Optimize grouped topk kernel, 1.2%~2% E2E Throughput improvement (#32058) by Wentao Ye (yewentao256 <zhyanwentao@126.com>), 2026-01-13
marlin_moe_wna16/               [Quantization][MoE] remove unused ep logic from moe marlin (#31571)                     2026-01-06
permute_unpermute_kernels/      Fix CUDA permute/unpermute for use with DeepGemm Moe (#17934)                           2025-07-27
dynamic_4bit_int_moe_cpu.cpp    [CPU] Parallelize over tokens in int4 moe (#29600)                                      2025-12-02
grouped_topk_kernels.cu         [Perf] Optimize grouped topk kernel, 1.2%~2% E2E Throughput improvement (#32058)        2026-01-13
moe_align_sum_kernels.cu        Lora MoE Align Improvements (#29257)                                                    2025-12-09
moe_ops.h                       Lora MoE Align Improvements (#29257)                                                    2025-12-09
moe_permute_unpermute_op.cu     [Kernel] CUTLASS MoE FP8: Integrate cuda moe permute/unpermute (#23045)                 2025-08-20
moe_wna16_utils.h               pre-commit autoupdate (#17380)                                                          2025-04-29
moe_wna16.cu                    [BugFix] Accuracy fix for llama4 int4 - improperly casted scales (#16801)               2025-04-17
topk_softmax_kernels.cu         [Kernel][Performance] Fuse float cast and renormalize to topk softmax kernel (#26717)   2025-10-17
torch_bindings.cpp              [Quantization][MoE] remove unused ep logic from moe marlin (#31571)                     2026-01-06