biondizzle / vllm
vllm / csrc / attention (at commit eed74a558ffacc9a456d440b5d2ec1ca869e80b5)
Latest commit: dacaf5a400 by wbn
Replace head_mapping params with num_kv_heads to attention kernel. (#1997)
...
Co-authored-by: wangguoya <wangguoya@baidu.com>
Co-authored-by: Yang Zhao <zhaoyangstar@foxmail.com>
2023-12-10 10:12:53 -08:00
File                    Last commit                                                                    Date
attention_dtypes.h      Improve setup script & Add a guard for bfloat16 kernels (#130)                 2023-05-27 00:59:32 -07:00
attention_generic.cuh   Change the name to vLLM (#150)                                                 2023-06-17 03:07:40 -07:00
attention_kernels.cu    Replace head_mapping params with num_kv_heads to attention kernel. (#1997)     2023-12-10 10:12:53 -08:00
attention_utils.cuh     Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                              2023-12-07 23:16:52 -08:00
dtype_bfloat16.cuh      Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                              2023-12-07 23:16:52 -08:00
dtype_float16.cuh       Merge EmbeddedLLM/vllm-rocm into vLLM main (#1836)                              2023-12-07 23:16:52 -08:00
dtype_float32.cuh       [BugFix] Fix NaN errors in paged attention kernel (#936)                       2023-09-04 09:20:06 +09:00