[Quantization] Deprecate Long Tail of Schemes (#31688)

Signed-off-by: Robert Shaw <robshaw@redhat.com>
Signed-off-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
Co-authored-by: Robert Shaw <robshaw@redhat.com>
Co-authored-by: Wentao Ye <44945378+yewentao256@users.noreply.github.com>
Committed by Robert Shaw on 2026-01-08 19:07:45 -05:00 via GitHub
parent d62cfe546d
commit 5825bbc1f7
8 changed files with 61 additions and 5 deletions

@@ -34,6 +34,10 @@ def test_model_experts_int8_startup(
     model_info.check_transformers_version(on_fail="skip")
     with vllm_runner(
-        model, dtype=dtype, enforce_eager=True, quantization="experts_int8"
+        model,
+        dtype=dtype,
+        enforce_eager=True,
+        quantization="experts_int8",
+        allow_deprecated_quantization=True,
     ) as vllm_model:
         vllm_model.generate_greedy(example_prompts, max_tokens)
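The updated test opts into a deprecated scheme via `allow_deprecated_quantization=True`. A minimal sketch of the kind of gate this flag implies is shown below; the names `DEPRECATED_SCHEMES` and `check_quantization` are illustrative assumptions, not vLLM's actual internals.

```python
# Hypothetical sketch of a deprecation gate for quantization schemes.
# DEPRECATED_SCHEMES and check_quantization are illustrative names,
# not vLLM's real API; membership of "experts_int8" is assumed from the diff.

DEPRECATED_SCHEMES = {"experts_int8"}


def check_quantization(quantization, allow_deprecated):
    """Reject a deprecated quantization scheme unless explicitly allowed."""
    if quantization in DEPRECATED_SCHEMES and not allow_deprecated:
        raise ValueError(
            f"Quantization scheme {quantization!r} is deprecated; "
            "pass allow_deprecated_quantization=True to keep using it."
        )


# Without the opt-in flag, a deprecated scheme is rejected:
try:
    check_quantization("experts_int8", allow_deprecated=False)
except ValueError as exc:
    print("rejected:", exc)

# With the flag set (as the updated test does), the check passes silently:
check_quantization("experts_int8", allow_deprecated=True)
```

A hard failure by default with an explicit escape hatch keeps existing tests and users working while making the deprecation impossible to miss.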