biondizzle / vllm
Files: vllm/vllm/engine @ 808d6fd7b97f71f64a14ad4eecb9afd7b4d9dcf8
Latest commit: Cyrus Leung · 4753f3bf69 · [Model] Use context managers for encoder- and LM-only mode (#32605)
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
2026-01-20 11:43:38 +08:00
File                  Last commit message                                                                      Date
__init__.py           Change the name to vLLM (#150)                                                           2023-06-17 03:07:40 -07:00
arg_utils.py          [Model] Use context managers for encoder- and LM-only mode (#32605)                      2026-01-20 11:43:38 +08:00
async_llm_engine.py   [V0 Deprecation] Remove AsyncLLMEngine (#25025)                                          2025-09-18 11:07:42 -07:00
llm_engine.py         [V0 Deprecation] Remove LLMEngine (#25033)                                               2025-09-20 17:56:30 -07:00
protocol.py           [Bugfix] Read truncate_prompt_tokens from pooling_params in AsyncLLM.encode() (#31013)   2025-12-20 10:29:31 +00:00