biondizzle / vllm
vllm / vllm / entrypoints at commit 93abf23a648051fe6dc053ba0b74499d119920bf
Latest commit 9c3dadd1c9 by Brad Hilton: [Frontend] Add logits_processors as an extra completion argument (#11150)
Signed-off-by: Brad Hilton <brad.hilton.nw@gmail.com>
2024-12-14 16:46:42 +00:00
Name          | Last commit | Date
openai        | [Frontend] Add logits_processors as an extra completion argument (#11150) | 2024-12-14 16:46:42 +00:00
__init__.py   | Change the name to vLLM (#150) | 2023-06-17 03:07:40 -07:00
api_server.py | bugfix: fix the bug that stream generate not work (#2756) | 2024-11-09 10:09:48 +00:00
chat_utils.py | [Model]: Add support for Aria model (#10514) | 2024-11-25 18:10:55 +00:00
launcher.py   | [Core][Bugfix][Perf] Introduce MQLLMEngine to avoid asyncio OH (#8157) | 2024-09-18 13:56:58 +00:00
llm.py        | [Core] V1: Use multiprocessing by default (#11074) | 2024-12-13 16:27:32 -08:00
logger.py     | [Frontend] API support for beam search (#9087) | 2024-10-05 23:39:03 -07:00
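
For orientation, llm.py in this directory defines vLLM's offline LLM entrypoint (api_server.py provides the server-side entrypoint). A minimal usage sketch is below; the model name and sampling settings are assumptions chosen for illustration, not taken from this listing.

```python
# Minimal offline-generation sketch using the LLM entrypoint defined in llm.py.
# Model name and sampling settings are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
sampling = SamplingParams(temperature=0.8, max_tokens=32)

# generate() returns one RequestOutput per prompt; each holds the generated completions.
outputs = llm.generate(["Hello, my name is"], sampling)
for output in outputs:
    print(output.outputs[0].text)
```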