biondizzle/vllm
vllm/vllm/tokenizers at 2f4a71daf200f4840d11435d932c676e943f2de3

Latest commit: fefce49807 by Chauncey, [Refactor] [6/N] to simplify the vLLM openai chat_completion serving architecture (#32240), Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>, 2026-01-13 13:01:39 +00:00
__init__.py - [Chore][1/2] Drop v0.14 deprecations (#31285) - 2025-12-24 09:54:01 -08:00
deepseek_v32_encoding.py - Add chat prefix completion feature to DeepSeek v3.2 (#31147) - 2026-01-05 11:20:25 +08:00
deepseek_v32.py - [Misc] Implement TokenizerLike.convert_tokens_to_ids (#31796) - 2026-01-06 12:08:22 +00:00
detokenizer_utils.py - [Chore] Move detokenizer_utils to vllm/tokenizers (#29727) - 2025-11-29 06:25:17 -08:00
grok2.py - [Model] Add Grok-2 (#31847) - 2026-01-08 04:59:48 -08:00
hf.py - [Refactor] TokenizerRegistry only uses lazy imports (#30609) - 2025-12-13 23:16:22 +08:00
mistral.py - [Refactor] [6/N] to simplify the vLLM openai chat_completion serving architecture (#32240) - 2026-01-13 13:01:39 +00:00
protocol.py - [Misc] Implement TokenizerLike.convert_tokens_to_ids (#31796) - 2026-01-06 12:08:22 +00:00
registry.py - [Model] Add Grok-2 (#31847) - 2026-01-08 04:59:48 -08:00
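The protocol.py entry above references TokenizerLike.convert_tokens_to_ids. As a rough sketch of what such a structural protocol might look like: the signature and the ToyTokenizer class below are assumptions for illustration, not vLLM's actual definitions.

```python
# Sketch only: vLLM's real TokenizerLike lives in vllm/tokenizers/protocol.py.
# The method signature here follows the common HF-style convention and is an
# assumption, not the actual vLLM code.
from typing import Protocol, runtime_checkable


@runtime_checkable
class TokenizerLike(Protocol):
    def convert_tokens_to_ids(self, tokens: list[str]) -> list[int]:
        ...


class ToyTokenizer:
    """Minimal stand-in that structurally satisfies the protocol."""

    def __init__(self) -> None:
        self._vocab = {"hello": 0, "world": 1}

    def convert_tokens_to_ids(self, tokens: list[str]) -> list[int]:
        # Unknown tokens map to -1 in this toy example.
        return [self._vocab.get(t, -1) for t in tokens]


tok = ToyTokenizer()
print(isinstance(tok, TokenizerLike))  # structural check via runtime_checkable
print(tok.convert_tokens_to_ids(["hello", "world", "x"]))
```

Because the protocol is runtime-checkable, any tokenizer implementation (HF, Mistral, DeepSeek) passes the isinstance check as long as it exposes the method, without inheriting from a common base class.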