biondizzle/vllm
Files at commit 197473c4e71c99025a0fd3925d0f130bdbfa1e42
Path: vllm/csrc/moe/marlin_moe_wna16
Latest commit: 6a6fc41c79 — gptq marlin quantization support for fused moe with lora (#30254)
Author: Bhanu Prakash Voutharoja
Signed-off-by: Bhanu068 <voutharoja.bhanu06@gmail.com>
Date: 2025-12-12 02:27:22 +00:00
File                 Last commit                                                          Date
.gitignore           [Kernel][Quantization] add w4a8 support for marlin kernel (#24722)   2025-11-29 07:19:33 -08:00
generate_kernels.py  [Kernel][Quantization] add w4a8 support for marlin kernel (#24722)   2025-11-29 07:19:33 -08:00
kernel.h             [Kernel][Quantization] add w4a8 support for marlin kernel (#24722)   2025-11-29 07:19:33 -08:00
marlin_template.h    [Kernel][Quantization] add w4a8 support for marlin kernel (#24722)   2025-11-29 07:19:33 -08:00
ops.cu               gptq marlin quantization support for fused moe with lora (#30254)    2025-12-12 02:27:22 +00:00