biondizzle/vllm
Files at 449de9001af69592618516b298aa1c5f321ded34: vllm/model_executor/layers/quantization/kernels
Latest commit: 5cc6bddb6e by Xiangyu Li, 2025-10-23 23:26:13 -04:00
[Kernel] Add GPTQv2 format support for low-bit or asymmetric quantization, by adapting gptq_gemm (#26092)
mixed_precision    [Kernel] Add GPTQv2 format support for low-bit or asymmetric quantization, by adapting gptq_gemm (#26092)    2025-10-23 23:26:13 -04:00
scaled_mm          [Chore] Clean up pytorch helper functions in vllm.utils (#26908)    2025-10-18 09:48:22 -07:00
__init__.py        [TPU][Quantization] TPU W8A8 (#11785)    2025-01-08 19:33:29 +00:00