Chopper: A Multi-Level GPU Characterization Tool & Derived Insights Into LLM Training Inefficiency
By: Marco Kurzynski, Shaizeen Aga, Di Wu
Potential Business Impact:
Makes AI training faster and more power-efficient.
Training large language models (LLMs) efficiently requires a deep understanding of how modern GPU systems behave under real-world distributed training workloads. While prior work has focused primarily on kernel-level performance or single-GPU microbenchmarks, the complex interaction between communication, computation, memory behavior, and power management in multi-GPU LLM training remains poorly characterized. In this work, we introduce Chopper, a profiling and analysis framework that collects, aligns, and visualizes GPU kernel traces and hardware performance counters across multiple granularities (i.e., from individual kernels to operations, layers, phases, iterations, and GPUs). Using Chopper, we perform a comprehensive end-to-end characterization of Llama 3 8B training under fully sharded data parallelism (FSDP) on an eight-GPU AMD Instinct™ MI300X node. Our analysis reveals several previously underexplored bottlenecks and behaviors, such as memory determinism enabling higher, more stable GPU and memory frequencies. We identify several sources of inefficiencies, with frequency overhead (DVFS effects) being the single largest contributor to the gap between theoretical and observed performance, exceeding the impact of MFMA utilization loss, communication/computation overlap, and kernel launch overheads. Overall, Chopper provides the first holistic, multi-granularity characterization of LLM training on AMD Instinct™ MI300X GPUs, yielding actionable insights for optimizing training frameworks, improving power-management strategies, and guiding future GPU architecture and system design.
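To make the multi-granularity idea concrete, below is a minimal, hypothetical sketch of the kind of alignment and roll-up the abstract describes: per-kernel trace records are matched with sampled hardware counters (e.g., GPU clock), and kernel-level metrics are then aggregated up through operations, layers, phases, iterations, and GPUs. The abstract does not expose Chopper's actual API, so every name here (KernelRecord, CounterSample, attach_counters, rollup) and the example data are assumptions for illustration only.

```python
# Hypothetical sketch, not the Chopper API: align per-kernel GPU trace records
# with sampled hardware counters, then roll kernel-level metrics up through the
# granularities named in the abstract (kernel -> op -> layer -> phase -> iteration -> GPU).
from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, List, Tuple

@dataclass
class KernelRecord:
    gpu: int
    iteration: int
    phase: str       # e.g. "forward", "backward", "optimizer"
    layer: str       # e.g. "decoder.0"
    op: str          # e.g. "attention", "mlp", "all_gather"
    name: str        # kernel name from the trace
    start_us: float
    end_us: float

@dataclass
class CounterSample:
    gpu: int
    time_us: float
    gpu_clock_mhz: float
    mem_clock_mhz: float

def attach_counters(kernels: List[KernelRecord],
                    samples: List[CounterSample]) -> Dict[int, List[CounterSample]]:
    """Assign each counter sample to the kernel whose time window contains it."""
    per_kernel: Dict[int, List[CounterSample]] = defaultdict(list)
    for s in samples:
        for i, k in enumerate(kernels):
            if k.gpu == s.gpu and k.start_us <= s.time_us < k.end_us:
                per_kernel[i].append(s)
                break
    return per_kernel

def rollup(kernels, per_kernel, level_key):
    """Aggregate kernel time and mean GPU clock at a chosen granularity."""
    agg: Dict[Tuple, Dict[str, float]] = defaultdict(
        lambda: {"time_us": 0.0, "clock_sum": 0.0, "samples": 0})
    for i, k in enumerate(kernels):
        key = level_key(k)
        agg[key]["time_us"] += k.end_us - k.start_us
        for s in per_kernel.get(i, []):
            agg[key]["clock_sum"] += s.gpu_clock_mhz
            agg[key]["samples"] += 1
    return {key: {"time_us": v["time_us"],
                  "mean_gpu_clock_mhz": (v["clock_sum"] / v["samples"]
                                         if v["samples"] else None)}
            for key, v in agg.items()}

if __name__ == "__main__":
    # Toy data: one compute kernel and one communication kernel on GPU 0.
    kernels = [
        KernelRecord(0, 0, "forward", "decoder.0", "attention", "flash_attn_fwd", 0.0, 120.0),
        KernelRecord(0, 0, "forward", "decoder.0", "all_gather", "rccl_all_gather", 120.0, 200.0),
    ]
    samples = [CounterSample(0, 60.0, 2100.0, 1300.0),
               CounterSample(0, 150.0, 1900.0, 1300.0)]
    per_kernel = attach_counters(kernels, samples)
    # Roll up per operation within each (gpu, iteration, phase, layer).
    print(rollup(kernels, per_kernel,
                 lambda k: (k.gpu, k.iteration, k.phase, k.layer, k.op)))
```

Changing the `level_key` lambda is what switches the aggregation level, which is one plausible way a tool could report the same counters at kernel, operation, layer, phase, iteration, or per-GPU granularity, as the paper describes.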
Similar Papers
A Systematic Characterization of LLM Inference on GPUs
Hardware Architecture
Makes AI understand and work much faster.
Characterizing the Efficiency of Distributed Training: A Power, Performance, and Thermal Perspective
Distributed, Parallel, and Cluster Computing
Makes AI models train faster on many computers.