biondizzle/vllm
vllm/vllm/model_executor/layers/attention at commit 025a32f9ed53b69c90be8a8883f5c9d880880d8a
Latest commit: ea6d067a2a [Misc][LLaMa4] Compile LLaMa Vision Encoder (#30709)
Signed-off-by: Lucas Kabela <lucaskabela@meta.com>
2026-01-09 22:01:38 -05:00
__init__.py                  [1/N][Attention] Restructure attention: move files (#31916)  2026-01-09 13:10:24 -08:00
chunked_local_attention.py   [1/N][Attention] Restructure attention: move files (#31916)  2026-01-09 13:10:24 -08:00
cross_attention.py           [1/N][Attention] Restructure attention: move files (#31916)  2026-01-09 13:10:24 -08:00
encoder_only_attention.py    [1/N][Attention] Restructure attention: move files (#31916)  2026-01-09 13:10:24 -08:00
mm_encoder_attention.py      [Misc][LLaMa4] Compile LLaMa Vision Encoder (#30709)         2026-01-09 22:01:38 -05:00
static_sink_attention.py     [1/N][Attention] Restructure attention: move files (#31916)  2026-01-09 13:10:24 -08:00