[Docs] Fix warnings in vllm/profiler and vllm/transformers_utils (#25220)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
This commit is contained in:
Michael Yao
2025-09-21 07:39:47 +08:00
committed by GitHub
parent bef180f009
commit 367a480bd3
3 changed files with 4 additions and 4 deletions


@@ -74,8 +74,7 @@ class JAISConfig(PretrainedConfig):
         use_cache (`bool`, *optional*, defaults to `True`):
             Whether or not the model should return the last key/values
             attentions (not used by all models).
-        scale_attn_by_inverse_layer_idx
-        (`bool`, *optional*, defaults to `False`):
+        scale_attn_by_inverse_layer_idx (`bool`, *optional*, defaults to `True`):
             Whether to additionally scale attention weights
             by `1 / layer_idx + 1`.
         reorder_and_upcast_attn (`bool`, *optional*, defaults to `False`):
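For context, the `scale_attn_by_inverse_layer_idx` option documented above follows the GPT-2-style trick of dividing attention weights by `layer_idx + 1` on top of the usual `1 / sqrt(head_dim)` factor. A minimal sketch of that scaling (the function name `attn_scale` is hypothetical, not part of vLLM's API):

```python
import math

def attn_scale(head_dim: int, layer_idx: int,
               scale_attn_by_inverse_layer_idx: bool = True) -> float:
    """Return the scalar applied to Q·K^T attention scores.

    Sketch only: real implementations fold this into the matmul.
    """
    # Standard scaled-dot-product factor.
    scale = 1.0 / math.sqrt(head_dim)
    if scale_attn_by_inverse_layer_idx:
        # Additionally scale by 1 / (layer_idx + 1), i.e. deeper
        # layers (0-indexed) get progressively smaller scores.
        scale /= float(layer_idx + 1)
    return scale
```

For example, with `head_dim=64` the base factor is `1/8`; layer 1 (the second layer) with the flag enabled halves it again.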