biondizzle/vllm
Path: vllm/csrc/cutlass_extensions (at commit c2bd2196fc8faf2a9a3e5e79931839ae60d2ab9d)

Latest commit: f89978ad7c by kushanam — add cutlass support for blackwell fp8 gemm (#13798), 2025-03-04 07:55:07 -08:00
Name                               Last commit                                                                         Date
epilogue/                          add cutlass support for blackwell fp8 gemm (#13798)                                 2025-03-04 07:55:07 -08:00
gemm/                              [Kernel] Update cutlass_scaled_mm to support 2d group (blockwise) scaling (#11868)  2025-01-30 18:33:00 -08:00
common.cpp                         [Kernel]: Cutlass 2:4 Sparsity + FP8/Int8 Quant Support (#10995)                    2024-12-18 09:57:16 -05:00
common.hpp                         [Kernel] Update cutlass_scaled_mm to support 2d group (blockwise) scaling (#11868)  2025-01-30 18:33:00 -08:00
cute_utils.cuh                     [Kernel] Initial Machete W4A8 support + Refactors (#9855)                           2024-11-18 12:59:29 -07:00
torch_utils.hpp                    [MISC] Replace c10::optional with std::optional (#11730)                            2025-01-05 10:20:34 +09:00
vllm_collective_builder.cuh        [Kernel] Update cutlass_scaled_mm to support 2d group (blockwise) scaling (#11868)  2025-01-30 18:33:00 -08:00
vllm_custom_types.cuh              [Kernel] (1/N) Machete - Hopper Optimized Mixed Precision Linear Kernel (#7174)     2024-08-20 07:09:33 -06:00
vllm_cutlass_library_extension.py  Update deprecated Python 3.8 typing (#13971)                                        2025-03-02 17:34:51 -08:00
vllm_numeric_conversion.cuh        [Kernel] Initial Machete W4A8 support + Refactors (#9855)                           2024-11-18 12:59:29 -07:00
vllm_type_utils.cuh                [Kernel] Initial Machete W4A8 support + Refactors (#9855)                           2024-11-18 12:59:29 -07:00
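The commit "Update deprecated Python 3.8 typing (#13971)" against vllm_cutlass_library_extension.py refers to the kind of modernization enabled by PEP 585: since Python 3.9, builtin containers are generic, so annotations like `typing.List[int]` can be written as `list[int]`. A minimal illustrative sketch of that change (the function `gemm_shape` is hypothetical, not code from this repository):

```python
# Before (Python 3.8 style, now deprecated aliases):
#   from typing import List, Tuple
#   def gemm_shape(dims: Tuple[int, int, int]) -> List[int]: ...
#
# After (PEP 585, Python 3.9+): builtin list/tuple are generic directly.

def gemm_shape(dims: tuple[int, int, int]) -> list[int]:
    """Return the output shape [M, N] of an M x K by K x N GEMM."""
    m, k, n = dims
    return [m, n]

print(gemm_shape((128, 64, 256)))  # → [128, 256]
```

The behavior is identical either way; the change only removes the import of deprecated `typing` aliases.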