vllm / vllm / entrypoints / openai / chat_completion (at commit 54e2f83d0a82462e0128e5d852e3d46fbb566a7f)

Latest commit: 54e2f83d0a, [Feature] Lazy import for the "mistral" tokenizer module. (#34651) ...
Signed-off-by: Neil Schemenauer <nas@arctrix.com>
2026-02-23 00:43:01 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | [Refactor] [6/N] to simplify the vLLM openai chat_completion serving architecture (#32240) | 2026-01-13 13:01:39 +00:00 |
| api_router.py | [bugfix] Fix critical bug when reporting for all paths where handler.create_error_response is used (#34516) | 2026-02-14 23:24:25 -08:00 |
| protocol.py | [BugFix]: Fix local mypy issues (#34739) | 2026-02-23 00:40:29 -08:00 |
| serving.py | [Feature] Lazy import for the "mistral" tokenizer module. (#34651) | 2026-02-23 00:43:01 -08:00 |
| stream_harmony.py | [MODEL] Fix handling of multiple channels for gpt-oss with speculative decoding (#26291) | 2026-01-14 13:20:52 -05:00 |