biondizzle/vllm: examples/offline_inference/qwen_1m.py
Commit: f2faac745dc9b9d7d2fa92a9a4cfba6b230db2d4
Tao He · 60f7624334 · 2025-05-12 19:52:47 -07:00 · 2.0 KiB
Implements dual-chunk-flash-attn backend for dual chunk attention with sparse attention support (#11844)
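
The contents of qwen_1m.py are not rendered on this page. As a rough sketch of what an offline-inference run against a 1M-context Qwen model with this backend typically looks like: the backend name `DUAL_CHUNK_FLASH_ATTN`, the model ID `Qwen/Qwen2.5-7B-Instruct-1M`, and the specific engine settings below are assumptions for illustration, not the verified contents of the file.

```python
# Sketch: offline inference with the dual-chunk flash-attention backend.
# The backend value and model are assumptions based on the #11844 commit
# message; the real qwen_1m.py may differ in prompts and engine settings.
import os

# Select the attention backend before importing vLLM so that engine
# initialization picks it up.
os.environ["VLLM_ATTENTION_BACKEND"] = "DUAL_CHUNK_FLASH_ATTN"

from vllm import LLM, SamplingParams

# Hypothetical settings: a 1M-token context window needs hardware to match
# (e.g. multiple GPUs via tensor parallelism).
llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct-1M",
    max_model_len=1_048_576,
    tensor_parallel_size=4,
)

sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

# The real example would feed a very long document here; this prompt is
# kept short purely for illustration.
outputs = llm.generate(
    ["Summarize the following document: ..."],
    sampling_params,
)
for output in outputs:
    print(output.outputs[0].text)
```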