biondizzle/vllm
Path: vllm/vllm/entrypoints/openai/chat_completion (at commit 3f28174c6a6d16dd42405016986a36f9d17e57c0)
Latest commit: 3f28174c6a by Cyrus Leung, 2026-01-14 11:22:26 +00:00
[Frontend] Standardize use of create_error_response (#32319)
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
__init__.py        [Refactor] [6/N] to simplify the vLLM openai chat_completion serving architecture (#32240)  2026-01-13 13:01:39 +00:00
api_router.py      [Frontend] Standardize use of create_error_response (#32319)                                2026-01-14 11:22:26 +00:00
protocol.py        [Refactor] [6/N] to simplify the vLLM openai chat_completion serving architecture (#32240)  2026-01-13 13:01:39 +00:00
serving.py         [Refactor] [6/N] to simplify the vLLM openai chat_completion serving architecture (#32240)  2026-01-13 13:01:39 +00:00
stream_harmony.py  [Refactor] [6/N] to simplify the vLLM openai chat_completion serving architecture (#32240)  2026-01-13 13:01:39 +00:00