vllm / csrc / attention
At commit 5fbbfe9a4c13094ad72ed3d6b4ef208a7ddc0fd7

Latest commit: [BugFix] FA2 MLA Accuracy Issue (#18807)
Author: Lucas Wilkinson
Signed-off-by: LucasWilkinson <lwilkinson@neuralmagic.com>
Date: 2025-05-30 08:50:58 -07:00
File | Last commit | Date
mla/ | [NVIDIA] Support Cutlass MLA for Blackwell GPUs (#16032) | 2025-04-27 06:29:21 -07:00
attention_dtypes.h | Enable scaled FP8 (e4m3fn) KV cache on ROCm (AMD GPU) (#3290) | 2024-04-03 14:15:55 -07:00
attention_generic.cuh | [CI/Build] Enforce style for C++ and CUDA code with clang-format (#4722) | 2024-05-22 07:18:41 +00:00
attention_kernels.cuh | fix: typos (#18151) | 2025-05-15 02:16:15 -07:00
attention_utils.cuh | [AMD][CI/Build] Disambiguation of the function call for ROCm 6.2 headers compatibility (#7477) | 2024-08-21 16:47:36 -07:00
dtype_bfloat16.cuh | [CI/Build] Suppress divide-by-zero and missing return statement warnings (#7001) | 2024-08-05 16:00:01 -04:00
dtype_float16.cuh | [CI/Build] Enforce style for C++ and CUDA code with clang-format (#4722) | 2024-05-22 07:18:41 +00:00
dtype_float32.cuh | [CI/Build] Enforce style for C++ and CUDA code with clang-format (#4722) | 2024-05-22 07:18:41 +00:00
dtype_fp8.cuh | [CI/Build] Enforce style for C++ and CUDA code with clang-format (#4722) | 2024-05-22 07:18:41 +00:00
merge_attn_states.cu | [BugFix] FA2 MLA Accuracy Issue (#18807) | 2025-05-30 08:50:58 -07:00
paged_attention_v1.cu | [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906) | 2025-01-23 18:04:03 +00:00
paged_attention_v2.cu | [FP8][Kernel] Dynamic kv cache scaling factors computation (#11906) | 2025-01-23 18:04:03 +00:00
vertical_slash_index.cu | Implements dual-chunk-flash-attn backend for dual chunk attention with sparse attention support (#11844) | 2025-05-12 19:52:47 -07:00
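
merge_attn_states.cu, the file touched by the FA2 MLA accuracy fix (#18807), merges partial attention outputs computed over separate KV partitions. The standard way to do this is log-sum-exp (LSE) reweighting, as used by split-KV attention schemes. Below is a minimal CUDA sketch of that idea; the kernel name, argument layout, and float32-only handling are assumptions for illustration, not vLLM's actual kernel:

```cuda
// Hypothetical sketch of an LSE-based attention-state merge; vLLM's
// merge_attn_states.cu is more general (dtypes, strides, LSE output).
#include <cuda_runtime.h>
#include <math.h>

// One block per head. Each partial output was computed with softmax
// restricted to its own KV partition, so each carries an LSE that
// records the true softmax normalizer for that partition.
__global__ void merge_attn_states_sketch(
    float* __restrict__ out,          // [num_heads, head_dim] merged result
    const float* __restrict__ out_a,  // partial output over partition A
    const float* __restrict__ lse_a,  // [num_heads] log-sum-exp for A
    const float* __restrict__ out_b,  // partial output over partition B
    const float* __restrict__ lse_b,  // [num_heads] log-sum-exp for B
    int head_dim) {
  int head = blockIdx.x;
  float la = lse_a[head], lb = lse_b[head];
  // Shift by the max before exponentiating for numerical stability.
  float m = fmaxf(la, lb);
  float wa = expf(la - m), wb = expf(lb - m);
  float inv = 1.0f / (wa + wb);
  for (int i = threadIdx.x; i < head_dim; i += blockDim.x) {
    int idx = head * head_dim + i;
    out[idx] = (wa * out_a[idx] + wb * out_b[idx]) * inv;
  }
}
```

Reweighting by exp(lse - max) reconstructs the global softmax from per-partition softmaxes without re-reading the KV cache, which is why the LSE values must be carried alongside the partial outputs.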
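paged_attention_v1.cu and paged_attention_v2.cu last changed with the dynamic KV-cache scaling-factor computation (#11906). For an FP8 (e4m3fn) KV cache, a dynamic scale is typically derived from the absolute maximum of the values so that they fill the FP8 range. A hedged sketch of that reduction follows; the kernel name, per-tensor granularity, and single-block launch are all assumptions, not vLLM's implementation:

```cuda
// Hypothetical sketch of dynamic FP8 scale computation; not vLLM's kernel.
#include <cuda_runtime.h>
#include <math.h>

// Largest finite value representable in FP8 E4M3 (the e4m3fn format).
constexpr float kFp8E4M3Max = 448.0f;

// Launch with a single block of 256 threads. Reduces abs-max over the
// tensor, then derives a scale so that x / scale fits in [-448, 448].
__global__ void compute_fp8_scale_sketch(const float* __restrict__ x,
                                         float* __restrict__ scale,
                                         int n) {
  __shared__ float smax[256];
  float local = 0.0f;
  for (int i = threadIdx.x; i < n; i += blockDim.x)
    local = fmaxf(local, fabsf(x[i]));
  smax[threadIdx.x] = local;
  __syncthreads();
  // Tree reduction over the shared-memory maxima.
  for (int s = blockDim.x / 2; s > 0; s >>= 1) {
    if (threadIdx.x < s)
      smax[threadIdx.x] = fmaxf(smax[threadIdx.x], smax[threadIdx.x + s]);
    __syncthreads();
  }
  if (threadIdx.x == 0) {
    // Guard against an all-zero tensor to avoid a zero scale.
    *scale = fmaxf(smax[0], 1e-10f) / kFp8E4M3Max;
  }
}
```

With such a scale, the cache stores x / scale quantized to FP8, and the attention kernels multiply by the scale on load to recover approximate full-precision values.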