biondizzle/vllm
vllm/vllm/transformers_utils at commit 250ee65d72a0c7b86ec5cea9cbe9377da21d6439
Latest commit: 250ee65d72 [BUG] Remove token param #10921 (#11022) by Flávia Béo (Signed-off-by: Flavia Beo <flavia.beo@ibm.com>), 2024-12-10 17:38:15 +00:00
Name                  Last commit date              Last commit message
configs/              2024-11-27 11:32:35 +00:00    [Model] Support telechat2 (#10311)
tokenizer_group/      2024-12-04 17:40:16 +00:00    [LoRA] Change lora_tokenizers capacity (#10796)
tokenizers/           2024-11-15 14:50:40 +00:00    [Bugfix] Ensure special tokens are properly filtered out for guided structured output with MistralTokenizer (#10363)
__init__.py           2024-10-29 10:36:59 -07:00    Fix the log to correct guide user to install modelscope (#9793)
config.py             2024-12-10 17:38:15 +00:00    [BUG] Remove token param #10921 (#11022)
detokenizer_utils.py  2024-10-22 01:24:07 -07:00    [V1] Implement vLLM V1 [1/N] (#9289)
detokenizer.py        2024-10-22 15:38:12 -07:00    [Bugfix] fix detokenizer shallow copy (#5919)
processor.py          2024-10-18 13:29:56 -06:00    [Model] Support Pixtral models in the HF Transformers format (#9036)
tokenizer.py          2024-10-29 14:13:20 -07:00    [Bugfix][Frontend] Guard against bad token ids (#9634)
utils.py              2024-09-02 08:43:26 -04:00    [Core][Bugfix] Accept GGUF model without .gguf extension (#8056)
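
The tokenizer.py module listed above provides the tokenizer-loading helpers used by the rest of vLLM. A minimal usage sketch follows, assuming the get_tokenizer helper and its tokenizer_mode/trust_remote_code keyword arguments are unchanged at this commit; the model name is only an example.

```python
# Minimal sketch (assumption): loading a tokenizer through
# vllm.transformers_utils.tokenizer.get_tokenizer as exposed at this commit.
from vllm.transformers_utils.tokenizer import get_tokenizer

# Returns a Hugging Face PreTrainedTokenizer/PreTrainedTokenizerFast,
# or a Mistral tokenizer when tokenizer_mode="mistral".
tokenizer = get_tokenizer(
    "facebook/opt-125m",        # example model; any HF repo id or local path
    tokenizer_mode="auto",      # "auto", "slow", or "mistral"
    trust_remote_code=False,
)

prompt_ids = tokenizer.encode("Hello, vLLM!")
print(prompt_ids)
```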