TrainVerify: Equivalence-Based Verification for Distributed LLM Training

Published: June 19, 2025 | arXiv ID: 2506.15961v2

By: Yunchi Lu, Youshan Miao, Cheng Tan, and more

Potential Business Impact:

Verifies that large-scale distributed AI training runs actually compute what the model specification prescribes, catching silent errors before they waste GPU hours.

Business Areas:
A/B Testing; Data and Analytics

Training large language models (LLMs) at scale requires parallel execution across thousands of devices, incurring enormous computational costs. Yet these costly distributed training runs are rarely verified, leaving them prone to silent errors and potentially wasting millions of GPU hours. We introduce TrainVerify, a system for verifiable distributed training of LLMs. Given a deep learning model's logical specification as the ground truth, TrainVerify formally verifies that a distributed parallel execution plan is mathematically equivalent to it. Direct verification is notoriously difficult due to the sheer scale of LLMs, which often involve billions of variables and highly intricate computation graphs. TrainVerify therefore introduces shape-reduction techniques and a stage-wise parallel verification algorithm that significantly reduce complexity while preserving formal correctness. TrainVerify scales to frontier LLMs, including the successful verification of the Llama3 (405B) and DeepSeek-V3 (671B) training plans.
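
To make the core idea concrete, here is a minimal sketch in Python of equivalence-based verification. It is not TrainVerify's actual algorithm (the paper's stage-wise formal procedure is far more involved); all function names here are illustrative. It checks that a tensor-parallel execution plan for Y = X @ W matches the single-device logical specification, using tiny exact-integer tensors in the spirit of the paper's shape reduction: if the plan's sharding and gathering logic is shape-agnostic, small instances can expose mismatches without billion-parameter tensors.

# Minimal sketch (assumed names, not TrainVerify's implementation):
# compare a column-parallel plan against the logical spec Y = X @ W.
import numpy as np

rng = np.random.default_rng(0)

def logical_spec(X, W):
    # Ground-truth specification: a plain, single-device matrix multiply.
    return X @ W

def column_parallel_plan(X, W, num_devices=2):
    # Distributed plan: shard W column-wise, run one matmul per "device",
    # then concatenate the partial outputs (conceptually, an all-gather).
    shards = np.split(W, num_devices, axis=1)
    partials = [X @ shard for shard in shards]
    return np.concatenate(partials, axis=1)

def verify_equivalence(trials=100, m=4, k=6, n=8):
    # Exact check on reduced shapes using integer arithmetic, so any
    # difference reflects a real plan bug rather than floating-point noise.
    for _ in range(trials):
        X = rng.integers(-5, 5, size=(m, k))
        W = rng.integers(-5, 5, size=(k, n))
        if not np.array_equal(logical_spec(X, W), column_parallel_plan(X, W)):
            return False
    return True

print("plan equivalent to spec:", verify_equivalence())

A buggy plan (say, concatenating the partial outputs in the wrong order, or reducing where it should gather) fails this check even at toy shapes, which is the intuition behind verifying reduced-shape instances instead of the full billion-variable computation graph.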

Page Count
21 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing