biondizzle/vllm
Files at commit 449de9001af69592618516b298aa1c5f321ded34
Path: vllm/vllm/model_executor/layers/quantization/kernels

Latest commit: Xiangyu Li 5cc6bddb6e, "[Kernel] Add GPTQv2 format support for low-bit or asymmetric quantization, by adapting gptq_gemm (#26092)", 2025-10-23 23:26:13 -04:00

mixed_precision/   [Kernel] Add GPTQv2 format support for low-bit or asymmetric quantization, by adapting gptq_gemm (#26092)   2025-10-23 23:26:13 -04:00
scaled_mm/         [Chore] Clean up pytorch helper functions in vllm.utils (#26908)                                            2025-10-18 09:48:22 -07:00
__init__.py        [TPU][Quantization] TPU W8A8 (#11785)                                                                       2025-01-08 19:33:29 +00:00