Microbenchmarking NVIDIA's Blackwell Architecture: An In-Depth Architectural Analysis

Published: December 1, 2025 | arXiv ID: 2512.02189v1

By: Aaron Jarmusch, Sunita Chandrasekaran

Potential Business Impact:

Helps developers tune AI training and inference workloads to run faster and more energy-efficiently on new NVIDIA GPUs.

Business Areas:
GPU Hardware

As GPU architectures rapidly evolve to meet the growing demands of exascale computing and machine learning, the performance implications of architectural innovations remain poorly understood across diverse workloads. NVIDIA's Blackwell (B200) generation introduces significant architectural advances, including 5th-generation tensor cores, tensor memory (TMEM), a decompression engine (DE), and a dual-chip design; however, systematic methodologies for quantifying these improvements lag behind hardware development cycles. We contribute an open-source microbenchmark suite that offers practical insights into optimizing workloads to fully utilize the rich feature set of the modern GPU architecture. This work aims to enable application developers to make informed architectural decisions and to guide future GPU design directions. Our work studies Blackwell GPUs and compares them to the H200 generation with regard to the memory subsystem, the tensor core pipeline, and floating-point precisions (FP32, FP16, FP8, FP6, FP4). Our systematic evaluation of dense/sparse GEMM, transformer inference, and training workloads demonstrates that B200's tensor core enhancements achieve 1.56x higher mixed-precision throughput and 42% better energy efficiency than H200. Our memory analysis reveals a 58% reduction in memory access latency on cache misses, fundamentally changing optimal algorithm design strategies.
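The cache-miss latency figure above comes from microbenchmarking; the sketch below illustrates the standard pointer-chasing technique that latency microbenchmark suites typically rely on. It is not code from the authors' suite: the kernel name, array sizes, and timing via `clock64()` are illustrative assumptions. Because every load depends on the result of the previous one, the measured time per step reflects the full latency of the memory hierarchy rather than its throughput.

```cuda
// Minimal pointer-chasing latency microbenchmark (illustrative sketch,
// not the authors' suite). A single thread walks a random cyclic
// permutation, so each load's address depends on the previous load.
#include <cstdio>
#include <vector>
#include <algorithm>
#include <random>
#include <cuda_runtime.h>

__global__ void pointer_chase(const unsigned int *chain, unsigned int start,
                              int steps, long long *cycles, unsigned int *sink) {
    unsigned int idx = start;
    long long t0 = clock64();                 // SM cycle counter
    for (int i = 0; i < steps; ++i) {
        idx = chain[idx];                     // dependent load: serializes the walk
    }
    long long t1 = clock64();
    *cycles = t1 - t0;
    *sink = idx;                              // keep the loop from being optimized away
}

int main() {
    const int n = 1 << 26;                    // 256 MiB of indices: well above typical L2 capacity
    const int steps = 1 << 20;                // number of dependent loads to time

    // Build a random single-cycle permutation on the host so hardware
    // prefetchers and caches cannot predict the access pattern.
    std::vector<unsigned int> perm(n), chain(n);
    for (int i = 0; i < n; ++i) perm[i] = i;
    std::mt19937 rng(42);
    std::shuffle(perm.begin() + 1, perm.end(), rng);
    for (int i = 0; i < n; ++i) chain[perm[i]] = perm[(i + 1) % n];

    unsigned int *d_chain, *d_sink;
    long long *d_cycles;
    cudaMalloc((void **)&d_chain, n * sizeof(unsigned int));
    cudaMalloc((void **)&d_sink, sizeof(unsigned int));
    cudaMalloc((void **)&d_cycles, sizeof(long long));
    cudaMemcpy(d_chain, chain.data(), n * sizeof(unsigned int),
               cudaMemcpyHostToDevice);

    // One thread, one block: we want raw dependent-load latency, not bandwidth.
    pointer_chase<<<1, 1>>>(d_chain, perm[0], steps, d_cycles, d_sink);
    cudaDeviceSynchronize();

    long long cycles = 0;
    cudaMemcpy(&cycles, d_cycles, sizeof(cycles), cudaMemcpyDeviceToHost);
    printf("average latency: %.1f cycles per load\n", (double)cycles / steps);

    cudaFree(d_chain);
    cudaFree(d_sink);
    cudaFree(d_cycles);
    return 0;
}
```

Dividing the per-load cycle count by the SM clock rate (available from `cudaGetDeviceProperties`) converts the result to nanoseconds, which is what makes cross-generation comparisons such as H200 versus B200 possible.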

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Hardware Architecture