biondizzle / vllm
vllm / attention / ops (at commit 1356df53bd5d6877358aff3d2bbd95f28f8009a4)
Latest commit 20cfcdec99 by youkaichao: [Core][Optimization] change python dict to pytorch tensor for blocks to swap (#4659), 2024-05-08 12:07:05 -07:00
__init__.py                 [Core] Refactor Attention Take 2 (#3462)                                               2024-03-25 04:39:33 +00:00
paged_attn.py               [Core][Optimization] change python dict to pytorch tensor for blocks to swap (#4659)   2024-05-08 12:07:05 -07:00
prefix_prefill.py           [Bugfix][Kernel] allow non-power-of-2 for prefix prefill with alibi (#4573)            2024-05-08 09:19:58 -07:00
triton_flash_attention.py   [ROCm][Hardware][AMD] Enable group query attention for triton FA (#4406)               2024-04-26 23:37:40 -07:00