[Chore]: Extract math and argparse utilities to separate modules (#27188)

Signed-off-by: Yeshwanth Surya <yeshsurya@gmail.com>
Signed-off-by: Yeshwanth N <yeshsurya@gmail.com>
Signed-off-by: yeshsurya <yeshsurya@gmail.com>
This commit is contained in:
Yeshwanth N
2025-10-26 16:33:32 +05:30
committed by GitHub
parent 8fb7b2fab9
commit 71b1c8b667
125 changed files with 716 additions and 640 deletions


@@ -6,7 +6,7 @@ import torch
from vllm.model_executor.layers.quantization.utils.quant_utils import group_broadcast
from vllm.platforms import current_platform
-from vllm.utils import round_up
+from vllm.utils.math_utils import round_up
# Using the default value (240.0) from pytorch will cause accuracy
# issue on dynamic quantization models. Here use 224.0 for rocm.
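The diff above only moves `round_up` from `vllm.utils` into a new `vllm.utils.math_utils` module; its behavior is unchanged. For context, a helper like this typically rounds an integer up to the nearest multiple, which quantization code uses for alignment. Below is a minimal sketch of such a function, written from the name alone, not the actual vLLM implementation:

```python
def round_up(x: int, multiple: int) -> int:
    """Round x up to the nearest multiple of `multiple`.

    Sketch of a typical round_up helper (assumed signature, not
    copied from vLLM). Uses integer ceiling division so no floats
    are involved.
    """
    return ((x + multiple - 1) // multiple) * multiple


if __name__ == "__main__":
    # e.g. padding a tensor dimension of 10 to a block size of 8
    print(round_up(10, 8))   # 16
    print(round_up(16, 8))   # 16 (already aligned)
```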