[core] overhaul memory profiling and fix backward compatibility (#10511)
Signed-off-by: youkaichao <youkaichao@gmail.com>
@@ -487,11 +487,12 @@ class EngineArgs:
             help='The fraction of GPU memory to be used for the model '
             'executor, which can range from 0 to 1. For example, a value of '
             '0.5 would imply 50%% GPU memory utilization. If unspecified, '
-            'will use the default value of 0.9. This is a global gpu memory '
-            'utilization limit, for example if 50%% of the gpu memory is '
-            'already used before vLLM starts and --gpu-memory-utilization is '
-            'set to 0.9, then only 40%% of the gpu memory will be allocated '
-            'to the model executor.')
+            'will use the default value of 0.9. This is a per-instance '
+            'limit, and only applies to the current vLLM instance. '
+            'It does not matter if you have another vLLM instance running '
+            'on the same GPU. For example, if you have two vLLM instances '
+            'running on the same GPU, you can set the GPU memory utilization '
+            'to 0.5 for each instance.')
         parser.add_argument(
             '--num-gpu-blocks-override',
             type=int,
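The semantics change above can be illustrated with a small sketch. The function names below are hypothetical, not part of vLLM; they only restate the arithmetic from the old and new help texts, assuming utilization is expressed as an integer percentage of total GPU memory.

```python
def old_global_budget_pct(util_pct: int, already_used_pct: int) -> int:
    """Old semantics: --gpu-memory-utilization was a global cap, so memory
    already in use on the GPU counted against this instance's budget."""
    return max(0, util_pct - already_used_pct)


def new_per_instance_budget_pct(util_pct: int) -> int:
    """New semantics: the flag is a per-instance limit, independent of what
    other processes (including other vLLM instances) have allocated."""
    return util_pct


# Scenario from the removed help text: 50% of GPU memory is already used
# before vLLM starts, and --gpu-memory-utilization is 0.9.
print(old_global_budget_pct(90, 50))    # old behavior: only 40% left for this instance

# Scenario from the new help text: two instances share one GPU, each set to 0.5.
print(new_per_instance_budget_pct(50))  # new behavior: each instance gets its full 50%
```

Under the new per-instance interpretation, two vLLM instances on the same GPU can each simply be launched with `--gpu-memory-utilization 0.5`, without accounting for each other's usage.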