[Deprecation] Remove prompt_token_ids arg fallback in LLM.generate and LLM.embed (#18800)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Author: Cyrus Leung
Date: 2025-08-22 10:56:57 +08:00
Committed by: GitHub
Parent: 19fe1a0510
Commit: 8896eb72eb
24 changed files with 116 additions and 467 deletions

@@ -46,5 +46,5 @@ def test_lm_head(
     vllm_model.apply_model(check_model)
     print(
-        vllm_model.generate_greedy(prompts=["Hello my name is"],
+        vllm_model.generate_greedy(["Hello my name is"],
                                    max_tokens=10)[0][1])
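
For callers migrating off the removed fallback, the sketch below shows the call patterns that remain supported: text prompts are passed positionally, and pre-tokenized input is wrapped in `TokensPrompt` rather than passed via the removed `prompt_token_ids=` keyword. The model name is illustrative, not part of this commit.

```python
from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt

# Illustrative checkpoint; any vLLM-supported model works the same way.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(max_tokens=10)

# Text prompts go in positionally (the keyword fallback is gone):
outputs = llm.generate(["Hello my name is"], params)

# Pre-tokenized prompts are wrapped in TokensPrompt instead of being
# passed through the removed prompt_token_ids= keyword argument:
token_ids = llm.get_tokenizer().encode("Hello my name is")
outputs = llm.generate(TokensPrompt(prompt_token_ids=token_ids), params)
```

The same pattern applies to `LLM.embed`: the input is a prompt (text or `TokensPrompt`) rather than a bare token-ID list under a dedicated keyword.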