diff --git a/docs/contributing/profiling.md b/docs/contributing/profiling.md
index e4bb0b696..1d12d6354 100644
--- a/docs/contributing/profiling.md
+++ b/docs/contributing/profiling.md
@@ -3,6 +3,10 @@
 !!! warning
     Profiling is only intended for vLLM developers and maintainers to understand the proportion of time spent in different parts of the codebase. **vLLM end-users should never turn on profiling** as it will significantly slow down the inference.
 
+!!! tip "Choosing a profiler"
+    - Use **Nsight Systems** for low-overhead, performance-critical profiling.
+    - Use **PyTorch Profiler** for medium-overhead profiling with richer debugging information (e.g., stack traces, memory, shapes). Note that enabling these features adds overhead and is not recommended for benchmarking.
+
 ## Profile with PyTorch Profiler
 
 We support tracing vLLM workers using different profilers. You can enable profiling by setting the `--profiler-config` flag when launching the server.
diff --git a/vllm/config/profiler.py b/vllm/config/profiler.py
index 6a40b9dad..e79e21310 100644
--- a/vllm/config/profiler.py
+++ b/vllm/config/profiler.py
@@ -45,10 +45,10 @@ class ProfilerConfig:
     worker's traces (CPU & GPU) will be saved under this directory. Note that
     it must be an absolute path."""
 
-    torch_profiler_with_stack: bool = False
-    """If `True`, enables stack tracing in the torch profiler. Disabled by default
-    to reduce overhead. Can be enabled via VLLM_TORCH_PROFILER_WITH_STACK=1 env var
-    or --profiler-config.torch_profiler_with_stack=true CLI flag."""
+    torch_profiler_with_stack: bool = True
+    """If `True`, enables stack tracing in the torch profiler. Enabled by default
+    as it is useful for debugging. Can be disabled via the
+    --profiler-config.torch_profiler_with_stack=false CLI flag."""
 
     torch_profiler_with_flops: bool = False
     """If `True`, enables FLOPS counting in the torch profiler. Disabled by default."""
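
The default flip above changes what users get when they construct a `ProfilerConfig` without arguments. Below is a minimal standalone sketch of just the two fields this diff touches (a hypothetical reproduction for illustration, not the actual vLLM class, which has more fields), showing the new defaults and the programmatic equivalent of the `--profiler-config.torch_profiler_with_stack=false` override:

```python
from dataclasses import dataclass


@dataclass
class ProfilerConfig:
    # Hypothetical standalone sketch of the fields changed in this diff.
    # Stack tracing is now enabled by default (was False before this change).
    torch_profiler_with_stack: bool = True
    # FLOPS counting remains disabled by default.
    torch_profiler_with_flops: bool = False


# Defaults after this change: stack tracing on, FLOPS counting off.
cfg = ProfilerConfig()
print(cfg.torch_profiler_with_stack)   # True
print(cfg.torch_profiler_with_flops)   # False

# Programmatic equivalent of the CLI override
# --profiler-config.torch_profiler_with_stack=false:
cfg_low_overhead = ProfilerConfig(torch_profiler_with_stack=False)
print(cfg_low_overhead.torch_profiler_with_stack)  # False
```

Because stack traces add profiling overhead, benchmark-oriented runs may want the explicit `false` override shown above.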