<h1>How to Use Nsight Compute for Deep Learning Workload Analysis</h1>

<img src="https://hackmd.io/_uploads/B1TAEDafZl.jpg" alt="deep-learning" />

<p>Nsight Compute is NVIDIA&rsquo;s kernel profiler. If Nsight Systems tells you <em>where</em> time goes across CPU, GPU, and the runtime, Nsight Compute tells you <em>why a specific GPU kernel is slow</em> by showing occupancy, memory behavior, instruction mix, Tensor Core usage, and more. For deep learning, this is where you confirm whether your key kernels are compute bound, memory bound, or limited by something more subtle like poor launch configuration or low Tensor Core utilization.</p>

<p>This blog walks through a practical workflow for using Nsight Compute to analyze deep learning workloads, with examples you can adapt for PyTorch, TensorFlow, and custom CUDA kernels.</p>

<h2><strong>When to use Nsight Compute for deep learning</strong></h2>

<p>Use Nsight Compute when you already know which kernels matter and you want deeper answers such as:</p>

<ul>
<li>Is my GEMM or convolution limited by memory bandwidth or math throughput?</li>
<li>Are Tensor Cores being used effectively?</li>
<li>Is occupancy low because of register pressure or shared memory usage?</li>
<li>Are memory loads uncoalesced or thrashing cache?</li>
<li>Why is this fused kernel slower than expected?</li>
</ul>

<p>If you do not yet know which kernels to inspect, start with Nsight Systems or PyTorch Profiler to identify hotspots, then zoom in with Nsight Compute.</p>

<h2><strong>Key concept: you are profiling kernels, not &ldquo;the model&rdquo;</strong></h2>

<p>Deep learning frameworks launch many GPU kernels per iteration. Nsight Compute profiles individual kernel launches. Your job is to identify the few kernels that dominate runtime, then analyze them deeply.</p>

<p>In many models, the heavy hitters are:</p>

<ul>
<li>GEMMs from matmul and linear layers (often cuBLAS)</li>
<li>Convolutions (often cuDNN)</li>
<li>Attention primitives, including softmax, layernorm, and fused kernels</li>
<li>Reductions and elementwise ops if fusion is poor</li>
<li>Data movement kernels if the pipeline is inefficient</li>
</ul>

<h2><strong>Getting set up</strong></h2>

<h3><strong>Install and verify access</strong></h3>

<p>Nsight Compute comes with the <a href="https://acecloud.ai/blog/nvidia-cuda-cores-explained/">NVIDIA CUDA</a> toolkit or as a separate package depending on your platform. Confirm you can run the CLI:</p>

<pre><code>ncu --version
</code></pre>

<p>For server environments, the CLI is usually enough. The GUI is great for interactive exploration and roofline charts, but it is optional.</p>

<h3><strong>Profile in a stable environment</strong></h3>

<p>Profiling changes timing and can perturb scheduling. For best results:</p>

<ul>
<li>Use a <a href="https://acecloud.ai/cloud/gpu/">dedicated GPU</a> if possible</li>
<li>Fix input shapes and batch size</li>
<li>Warm up the model before profiling</li>
<li>Avoid profiling the first iteration if kernels are still caching or autotuning</li>
</ul>

<h2><strong>Step 1: Narrow to a small capture window</strong></h2>

<p>Do not profile an entire training job. Instead, capture a short window around steady state.</p>

<p>For PyTorch, a simple approach is to run a few warmup iterations, then a small number of profiled iterations.</p>
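<p>A minimal sketch of that pattern is shown below. The model, shapes, and iteration counts are placeholders for your real workload; the key idea is to warm up first, then bracket a few steady-state iterations with torch.cuda.profiler.start() and stop(), which pairs with the --profile-from-start off flag used in the next step so that only the bracketed kernels are profiled.</p>

<pre><code># Minimal sketch: bracket a steady-state window for `ncu --profile-from-start off`.
# The model and input shapes below are placeholders for your real workload.
import torch

device = "cuda"
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).to(device).half()
x = torch.randn(32, 1024, device=device, dtype=torch.float16)

# Warmup iterations: let autotuning, caching, and lazy initialization settle.
with torch.no_grad():
    for _ in range(20):
        model(x)
torch.cuda.synchronize()

# Profiled window: when launched under `ncu --profile-from-start off ...`,
# kernels are only profiled between start() and stop().
torch.cuda.profiler.start()
with torch.no_grad():
    for _ in range(5):
        model(x)
torch.cuda.synchronize()
torch.cuda.profiler.stop()
</code></pre>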
<p>If you cannot easily &ldquo;pause&rdquo; the run, Nsight Compute supports capturing only specific kernels or only the first N launches, which is often good enough.</p>

<h2><strong>Step 2: Collect a lightweight kernel list first</strong></h2>

<p>Start with a minimal metric set to see which kernels dominate and to confirm you are targeting the right work. A typical pattern:</p>

<pre><code>ncu --target-processes all \
    --set default \
    --profile-from-start off \
    --launch-skip 100 \
    --launch-count 50 \
    -o ncu_report \
    python train_or_infer.py
</code></pre>

<p>Notes:</p>

<ul>
<li>--launch-skip and --launch-count help you skip warmup launches and capture a stable window.</li>
<li>--set default keeps overhead manageable.</li>
<li>--target-processes all helps with multi-process launchers, but use it carefully since it can generate a lot of data.</li>
</ul>

<p>Then open the report in the GUI or inspect it from the CLI:</p>

<pre><code>ncu --import ncu_report.ncu-rep --page summary
</code></pre>

<p>Look for the kernels with the highest total time. Often you will see cuBLAS GEMM kernels and cuDNN convolution kernels near the top.</p>

<h2><strong>Step 3: Filter to the one kernel you care about</strong></h2>

<p>Once you identify the kernel name, profile only that kernel with richer metrics. Filtering dramatically reduces overhead and noise.</p>

<p>Example:</p>

<pre><code>ncu --target-processes all \
    --kernel-name "regex:.*gemm.*" \
    --launch-count 10 \
    --set full \
    -o gemm_deep \
    python train_or_infer.py
</code></pre>

<p>If you are profiling attention, you might filter by patterns like fmha, flash, softmax, layernorm, or framework-specific fused kernel names.</p>

<h2><strong>Step 4: Use the &ldquo;Speed of Light&rdquo; view and roofline thinking</strong></h2>

<p>Nsight Compute summarizes its headline findings in a &ldquo;Speed of Light&rdquo; style section, which reports achieved compute and memory throughput as a percentage of peak. The core question is: is the kernel limited by compute or by memory?</p>

<p>A simple roofline mindset helps:</p>

<ul>
<li>If achieved FLOP throughput is far below peak, you might be memory bound or instruction limited.</li>
<li>If memory bandwidth is near peak but math throughput is low, you are likely memory bound.</li>
<li>If both are low, you may have low occupancy, launch overhead, synchronization, or poor vectorization.</li>
</ul>

<p>For deep learning kernels, common scenarios include:</p>

<ul>
<li>A GEMM is compute bound but not using Tensor Cores efficiently</li>
<li>Softmax is memory bound with lots of reads and writes</li>
<li>Layernorm is memory bound, and fusion matters more than micro-tuning</li>
<li>Small-batch inference leads to underutilized compute due to small problem size</li>
</ul>
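<p>To make the roofline idea concrete, here is a rough back-of-the-envelope check for an FP16 GEMM. The peak throughput and bandwidth values below are placeholder, datasheet-style numbers in the range of a modern data center GPU; substitute your own GPU&rsquo;s figures. The comparison against the machine balance (peak FLOPs divided by peak bandwidth) is the same kind of judgment the Speed of Light section makes for you, only with measured rather than theoretical values.</p>

<pre><code># Rough roofline check for C = A @ B in FP16 (2 bytes per element).
# PEAK_* values are placeholders; use your GPU's datasheet numbers.
PEAK_TFLOPS = 312.0    # assumed FP16 Tensor Core peak, dense
PEAK_BW_GBS = 2000.0   # assumed HBM bandwidth in GB/s

def gemm_arithmetic_intensity(m, n, k, bytes_per_elem=2):
    flops = 2.0 * m * n * k                                  # multiply-adds
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)   # read A and B, write C
    return flops / bytes_moved

def machine_balance():
    # FLOPs per byte required to be compute bound at peak.
    return (PEAK_TFLOPS * 1e12) / (PEAK_BW_GBS * 1e9)

for m, n, k in [(4096, 4096, 4096), (32, 4096, 4096)]:
    ai = gemm_arithmetic_intensity(m, n, k)
    bound = "compute" if ai > machine_balance() else "memory"
    print(f"M={m} N={n} K={k}: {ai:.0f} FLOP/byte, likely {bound} bound")
</code></pre>

<p>With these placeholder peaks, the large square GEMM lands well above the machine balance, while the batch-32 GEMM falls below it, which matches the &ldquo;small problem size&rdquo; scenario above.</p>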
<h2><strong>Step 5: Check Tensor Core utilization</strong></h2>

<p>If you expect Tensor Cores to be active, verify it. Things that often prevent Tensor Core use:</p>

<ul>
<li>Wrong dtype, such as an FP32 path when you intended FP16 or BF16</li>
<li>Shapes not aligned to preferred tile sizes</li>
<li>Operations not hitting Tensor Core-capable kernels</li>
<li>Conversions or layout issues causing fallbacks</li>
</ul>

<p>Nsight Compute can reveal whether the kernel executes HMMA or other Tensor Core instructions. If you see a GEMM that is not using Tensor Cores when it should, fix dtype, layout, and alignment first before any deeper tuning.</p>

<h2><strong>Step 6: Diagnose occupancy and launch limits</strong></h2>

<p>Low occupancy can be fine if the kernel is already saturating memory bandwidth or compute, but it can also be a red flag.</p>

<p>Typical reasons occupancy is limited:</p>

<ul>
<li>High register usage per thread</li>
<li>Large shared memory allocation per block</li>
<li>A block size that does not map well to SM resources</li>
<li>Too few blocks overall because the problem size is small</li>
</ul>

<p>Nsight Compute shows achieved occupancy and the limiting factor. For deep learning inference at small batch sizes, the &ldquo;problem too small&rdquo; scenario is extremely common, and the fix is usually batching, fusion, or choosing kernels optimized for small shapes rather than chasing occupancy alone.</p>

<h2><strong>Step 7: Investigate memory behavior</strong></h2>

<p>If the kernel looks memory bound, focus on:</p>

<ul>
<li>Global memory load and store efficiency</li>
<li>L2 cache hit rate</li>
<li>Memory coalescing and transaction sizes</li>
<li>Excessive reads and writes due to unfused ops</li>
<li>Use of shared memory and whether it reduces global traffic</li>
</ul>

<p>In many deep learning graphs, the largest memory improvements come from fusion and avoiding intermediate tensors, not from micro-optimizing a single kernel. Nsight Compute helps you prove this by showing how much bandwidth each kernel consumes and whether data movement is the true limiter.</p>

<h2><strong>Step 8: Use NVTX to map kernels to model phases</strong></h2>

<p>Deep learning runs can have hundreds of similar kernels. NVTX ranges make it easier to connect kernels to steps like forward, backward, optimizer, or specific layers.</p>

<p>Many frameworks and libraries already emit NVTX ranges when enabled. If not, you can add NVTX ranges in your code and then use those ranges to guide what you capture and when.</p>
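<p>A minimal sketch of adding NVTX ranges in PyTorch is shown below; the range names are arbitrary. Once the ranges are in place, you can see them on the Nsight Systems timeline to pick a capture window, and Nsight Compute can restrict collection to kernels inside a named range through its NVTX filtering options (see --nvtx and --nvtx-include in the CLI documentation).</p>

<pre><code># Minimal sketch: NVTX ranges around model phases (range names are arbitrary).
import torch

def training_step(model, batch, target, loss_fn, optimizer):
    torch.cuda.nvtx.range_push("forward")
    output = model(batch)
    loss = loss_fn(output, target)
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("backward")
    loss.backward()
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("optimizer")
    optimizer.step()
    optimizer.zero_grad()
    torch.cuda.nvtx.range_pop()
    return loss
</code></pre>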
<h2><strong>Practical tips and common pitfalls</strong></h2>

<ul>
<li>Profile fewer kernels with deeper metrics, not many kernels with everything.</li>
<li>Re-run profiling. Single runs can be noisy, especially with dynamic shapes or autotuning.</li>
<li>Watch for profiler overhead. The &ldquo;full&rdquo; set can slow kernels significantly.</li>
<li>Lock clocks only if you know what you are doing. Fixing GPU clocks can improve repeatability but can also misrepresent production behavior.</li>
<li>For multi-GPU training, start with one GPU: profile a single rank, and isolate communication kernels separately with Nsight Systems.</li>
<li>In containers, ensure you have the right permissions and that the host driver supports profiling. Some environments restrict access to GPU performance counters.</li>
</ul>

<h2><strong>A simple end-to-end workflow</strong></h2>

<ol>
<li>Identify hotspot ops with PyTorch Profiler or Nsight Systems.</li>
<li>Capture a short steady-state window with Nsight Compute&rsquo;s default set.</li>
<li>Filter to the top kernel and switch to a richer set like full.</li>
<li>Classify the kernel as compute bound or memory bound.</li>
<li>Verify Tensor Core usage where expected.</li>
<li>Use occupancy and memory metrics to decide the best fix, often dtype, layout, fusion, batching, or kernel choice.</li>
<li>Re-benchmark with the same inputs and compare throughput and latency.</li>
</ol>

<p>The most useful command line and metric sections depend on whether you are profiling training or inference, on your GPU architecture, and on whether you run FP16, BF16, or FP32. Treat the commands above as a starting point, and narrow the capture window and metric sets to the handful of kernels that dominate your specific workload.</p>