biondizzle / vllm
vllm/entrypoints/openai/chat_completion @ 51931c5c9a3e36ccdf746f5fe3b4e770d557041d
Latest commit: 51931c5c9a — [UX] Deduplicate sampling parameter startup logs (#32953) by Cyrus Leung, Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>, 2026-01-24 17:37:28 +08:00
__init__.py        | [Refactor] [6/N] to simplify the vLLM openai chat_completion serving architecture (#32240)  | 2026-01-13 13:01:39 +00:00
api_router.py      | [Frontend] Add render endpoints for prompt preprocessing (#32473)                           | 2026-01-19 12:21:46 +08:00
protocol.py        | [bugfix] Fix online serving crash when text type response_format is received (#26822)       | 2026-01-16 12:23:54 +08:00
serving.py         | [UX] Deduplicate sampling parameter startup logs (#32953)                                   | 2026-01-24 17:37:28 +08:00
stream_harmony.py  | [MODEL] Fix handling of multiple channels for gpt-oss with speculative decoding (#26291)    | 2026-01-14 13:20:52 -05:00