biondizzle/vllm
vllm/vllm/engine @ commit d93bf4da855a0c5e8d3c875def6b37c5e9d77763
Latest commit: e29d4358ef [V1] Include Engine Version in Logs (#12496), Robert Shaw, 2025-01-28 08:27:41 +00:00
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Name                  Last commit                                                                          Last updated
multiprocessing/      [Core] Support reset_prefix_cache (#12284)                                           2025-01-22 18:52:27 +00:00
output_processor/     [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes (#12277)   2025-01-21 23:30:46 +00:00
__init__.py           Change the name to vLLM (#150)                                                       2023-06-17 03:07:40 -07:00
arg_utils.py          [Frontend] generation_config.json for maximum tokens (#12242)                        2025-01-26 19:59:25 +08:00
async_llm_engine.py   [Core] Support reset_prefix_cache (#12284)                                           2025-01-22 18:52:27 +00:00
async_timeout.py      [Bugfix] AsyncLLMEngine hangs with asyncio.run (#5654)                               2024-06-19 13:57:12 -07:00
llm_engine.py         [V1] Include Engine Version in Logs (#12496)                                         2025-01-28 08:27:41 +00:00
metrics_types.py      monitor metrics of tokens per step using cudagraph batchsizes (#11031)               2024-12-09 22:35:36 -08:00
metrics.py            [Misc] Remove deprecated code (#12383)                                               2025-01-24 14:45:20 -05:00
protocol.py           [Core] Support reset_prefix_cache (#12284)                                           2025-01-22 18:52:27 +00:00
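Note: llm_engine.py defines vLLM's synchronous LLMEngine and arg_utils.py its EngineArgs configuration; async_llm_engine.py wraps the same loop for asyncio use. A minimal sketch of driving the engine in this directory directly (the model name and sampling values below are illustrative assumptions, not taken from this listing):

from vllm import SamplingParams
from vllm.engine.arg_utils import EngineArgs
from vllm.engine.llm_engine import LLMEngine

# Build the engine from CLI-style arguments (arg_utils.py);
# the model chosen here is an assumed small example.
engine_args = EngineArgs(model="facebook/opt-125m")
engine = LLMEngine.from_engine_args(engine_args)

# Queue one request; request_id must be unique per request.
engine.add_request(
    request_id="req-0",
    prompt="Hello, my name is",
    params=SamplingParams(temperature=0.8, max_tokens=16),
)

# step() runs one scheduling/execution iteration and returns any
# newly produced outputs; loop until all requests finish.
while engine.has_unfinished_requests():
    for request_output in engine.step():
        if request_output.finished:
            print(request_output.outputs[0].text)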