biondizzle/vllm
Commit: ece2825a29e6b54ce6b114c27ec7ea498c66b416
Path: vllm/tests/v1/sample
Latest commit: 48d15a32aa [CI] Fix Bad_words test for tokenizer encode/decode asymmetry (#28193) by 杰兮
Signed-off-by: zhyajie <yajizhan@amd.com>
Co-authored-by: zhyajie <yajizhan@amd.com>
Date: 2025-12-02 00:02:12 -08:00
File | Last commit | Date
__init__.py | [V1] Adding min tokens/repetition/presence/frequence penalties to V1 sampler (#10681) | 2024-12-26 19:02:58 +09:00
test_logprobs_e2e.py | Convert formatting to use ruff instead of yapf + isort (#26247) | 2025-10-05 07:06:22 -07:00
test_logprobs.py | [BugFix] Fix chunked prompt logprobs + preemption (#29071) | 2025-11-22 16:07:18 -05:00
test_rejection_sampler.py | [V1][spec decode] return logprobs for spec decoding (#26060) | 2025-10-22 22:59:59 -07:00
test_sampler.py | [Chore] Separate out vllm.utils.platform_utils.py (#27374) | 2025-10-23 19:08:06 +00:00
test_sampling_params_e2e.py | [CI] Fix Bad_words test for tokenizer encode/decode asymmetry (#28193) | 2025-12-02 00:02:12 -08:00
test_topk_topp_sampler.py | [Kernel] Lazy import FlashInfer (#26977) | 2025-10-17 04:48:18 +00:00
utils.py | [Chore] Clean up pytorch helper functions in vllm.utils (#26908) | 2025-10-18 09:48:22 -07:00