biondizzle/vllm
vllm/tests/entrypoints/llm
Commit: a32cb49b60688fb64a6d3d7f86378b4d2fad06e6
Latest commit: f0a1c8453a  Cyrus Leung  [Frontend] Use new Renderer for Completions and Tokenize API (#32863)
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
2026-01-31 04:51:15 -08:00
__init__.py                  …
test_accuracy.py             [V0 Deprecation] Remove VLLM_USE_V1 from tests (#26341)   2025-10-07 15:42:31 +00:00
test_chat.py                 [Frontend] Use new Renderer for Completions and Tokenize API (#32863)   2026-01-31 04:51:15 -08:00
test_collective_rpc.py       [CI] Replace large models with tiny alternatives in tests (#24057)   2025-10-16 15:51:27 +01:00
test_generate.py             [Bugfix][Frontend] validate arg priority in frontend LLM class before add request (#27596)   2025-10-28 14:02:43 +00:00
test_gpu_utilization.py      …
test_mm_cache_stats.py       [Metrics] Add test for multi-modal cache stats logging (#26588)   2025-10-10 16:00:50 +00:00
test_prompt_validation.py    [Frontend] Require flag for loading text and image embeds (#27204)   2025-10-22 15:52:02 +00:00