vllm/vllm/entrypoints/openai at commit 4d31cd424bdd5935cefa8f03e137bba127be31dd

Latest commit: 4d31cd424b [Frontend] merge beam search implementations (#9296), Brendan Wong, 2024-10-14 15:05:52 -07:00
File                      Date                        Last commit
tool_parsers/             2024-10-09 08:59:57 -06:00  [Bugfix] Access get_vocab instead of vocab in tool parsers (#9188)
__init__.py               2023-06-17 03:07:40 -07:00  Change the name to vLLM (#150)
api_server.py             2024-10-08 09:38:40 -07:00  [Bugfix] fix OpenAI API server startup with --disable-frontend-multiprocessing (#8537)
cli_args.py               2024-10-08 14:31:26 +00:00  [Frontend] Add Early Validation For Chat Template / Tool Call Parser (#9151)
logits_processors.py      2024-08-20 23:28:21 -07:00  [mypy] Enable following imports for entrypoints (#7248)
protocol.py               2024-10-07 05:47:04 +00:00  [core] remove beam search from the core (#9105)
run_batch.py              2024-09-20 06:20:56 +00:00  [Core] Support Lora lineage and base model metadata management (#6315)
serving_chat.py           2024-10-14 15:05:52 -07:00  [Frontend] merge beam search implementations (#9296)
serving_completion.py     2024-10-14 15:05:52 -07:00  [Frontend] merge beam search implementations (#9296)
serving_embedding.py      2024-10-04 18:31:40 +00:00  Adds truncate_prompt_tokens param for embeddings creation (#8999)
serving_engine.py         2024-10-05 23:39:03 -07:00  [Frontend] API support for beam search (#9087)
serving_tokenization.py   2024-09-29 17:59:47 +00:00  [Frontend] Added support for HF's new continue_final_message parameter (#8942)
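
This directory implements vLLM's OpenAI-compatible frontend: api_server.py starts the HTTP server, cli_args.py defines its command-line flags, protocol.py holds the request/response models, and the serving_*.py modules back the individual endpoints (chat, completions, embeddings, tokenization). Below is a minimal sketch of talking to a running server with the official openai Python client; the launch command, port, and model name in the comments are assumptions for illustration, not part of this listing.

# Minimal sketch of querying a running vLLM OpenAI-compatible server.
# Assumes the server was launched separately, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.1-8B-Instruct
# The model name and port below are placeholders.
from openai import OpenAI

# vLLM serves the OpenAI REST API; the key is unchecked unless --api-key is set.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Summarize what vLLM does in one sentence."}],
)
print(response.choices[0].message.content)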