vllm/tests/evals/gsm8k/configs/moe-refactor/Llama-4-Scout-BF16-fi-cutlass.yaml


model_name: "meta-llama/Llama-4-Scout-17B-16E-Instruct"
accuracy_threshold: 0.92
num_questions: 1319
num_fewshot: 5
server_args: "--enforce-eager --max-model-len 8192 --tensor-parallel-size 2 --enable-expert-parallel"
env:
  VLLM_USE_FLASHINFER_MOE_FP16: "1"
  VLLM_FLASHINFER_MOE_BACKEND: "throughput"
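For context, a minimal sketch of how an eval harness might turn a config like this into a server launch command. This is illustrative only: the `build_launch_command` helper and the dict-based config representation are assumptions, not vLLM's actual GSM8K eval code; only the flag and variable values are taken from the YAML above.

```python
import shlex

# Hypothetical in-memory form of the YAML config above.
config = {
    "model_name": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
    "accuracy_threshold": 0.92,
    "num_questions": 1319,
    "num_fewshot": 5,
    "server_args": (
        "--enforce-eager --max-model-len 8192 "
        "--tensor-parallel-size 2 --enable-expert-parallel"
    ),
    "env": {
        "VLLM_USE_FLASHINFER_MOE_FP16": "1",
        "VLLM_FLASHINFER_MOE_BACKEND": "throughput",
    },
}

def build_launch_command(cfg):
    """Assemble an `env VAR=... vllm serve <model> <server_args>` argv list.

    The env vars are prepended with `env` so the whole thing can be passed
    to a subprocess runner unchanged (a sketch, not the harness's real API).
    """
    env_prefix = [f"{k}={v}" for k, v in cfg["env"].items()]
    return [
        "env", *env_prefix,
        "vllm", "serve", cfg["model_name"],
        *shlex.split(cfg["server_args"]),
    ]

cmd = build_launch_command(config)
print(" ".join(cmd))
```

`shlex.split` keeps quoted arguments intact if `server_args` ever contains them, which a plain `str.split` would not.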