biondizzle / vllm
Files at commit df4d3a44a83681feea723cc4c4ebe9085d29d58d: vllm/vllm/model_executor/models/transformers
Latest commit: 9d1c474704 — [LoRA][1/N] Remove LoRA extra vocab (#28382), Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>, 2025-11-11 11:06:21 -08:00
__init__.py     Refactor Transformers backend to use mixins (#26906)                        2025-10-16 21:50:39 +00:00
base.py         [Bugfix] Fix encoder-only model support for transformers backend (#28021)   2025-11-04 22:24:41 -08:00
causal.py       [LoRA][1/N] Remove LoRA extra vocab (#28382)                                2025-11-11 11:06:21 -08:00
legacy.py       Fix pooling adapters for Transformers backend (#27338)                      2025-10-23 20:23:55 -07:00
moe.py          [BugFix] Support EP/DP + EPLB with MTP (#25311)                             2025-11-05 15:22:17 +00:00
multimodal.py   [Model] Pass mm_features directly into get_mrope_input_positions (#28399)   2025-11-11 21:14:48 +08:00
pooling.py      Fix pooling adapters for Transformers backend (#27338)                      2025-10-23 20:23:55 -07:00
utils.py        Fix issues from #28242 (#28257)                                             2025-11-07 04:23:17 +00:00