biondizzle/vllm
Files at commit 8332078cfdbd5e44e527893b695e79052d008172

vllm/vllm/model_executor/models/transformers
History

Latest commit: dfe5e31689, Don't compile vision encoder for Transformers backend (#30518)
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
2026-04-02 12:42:29 +00:00
__init__.py     Don't compile vision encoder for Transformers backend (#30518)                                 2026-04-02 12:42:29 +00:00
base.py         Don't compile vision encoder for Transformers backend (#30518)                                 2026-04-02 12:42:29 +00:00
causal.py       Fix pipeline parallel with multimodal models with the Transformers modelling backend (#37057)  2026-03-16 10:20:37 +00:00
legacy.py       [Bugfix] Fix RoBERTa position_ids accumulation on CUDA graph padding (#37884)                  2026-03-23 15:15:12 +00:00
moe.py          [MoE Refactor] Make SharedExperts class for use with DefaultMoERunner (#35153)                 2026-04-01 09:44:08 -04:00
multimodal.py   Don't compile vision encoder for Transformers backend (#30518)                                 2026-04-02 12:42:29 +00:00
pooling.py      [Doc] Fix duplicate words in comments (#36713)                                                 2026-03-10 21:28:31 -07:00
utils.py        Replace nn.ConvNd with vLLM's ConvNdLayer for Transformers modeling backend (#31498)           2025-12-29 16:20:01 +00:00