TrainVerify: Equivalence-Based Verification for Distributed LLM Training
By: Yunchi Lu, Youshan Miao, Cheng Tan, and more
Potential Business Impact:
Checks that distributed AI training is executed correctly, preventing silent errors from wasting GPU hours.
Training large language models (LLMs) at scale requires parallel execution across thousands of devices, incurring enormous computational costs. Yet these costly distributed training runs are rarely verified, leaving them prone to silent errors and potentially wasting millions of GPU hours. We introduce TrainVerify, a system for verifiable distributed training of LLMs. Given a deep learning model's logical specification as the ground truth, TrainVerify formally verifies that a distributed parallel execution plan is mathematically equivalent to it. Direct verification is notoriously difficult due to the sheer scale of LLMs, which often involve billions of variables and highly intricate computation graphs. TrainVerify therefore introduces shape-reduction techniques and a stage-wise parallel verification algorithm that significantly reduce complexity while preserving formal correctness. TrainVerify scales to frontier LLMs, successfully verifying the Llama3 (405B) and DeepSeek-V3 (671B) training plans.
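The abstract does not include code, so the following is only a minimal sketch of the core idea it describes: checking that a parallel execution plan is symbolically equivalent to the logical specification, on shape-reduced tensors rather than full billion-parameter ones. It uses sympy (an assumption; the paper's actual tooling is not specified here), a single column-parallel matmul as the "plan", and hypothetical 2x2 reduced shapes.

import sympy as sp

# Shape reduction: verify on tiny symbolic tensors (here 2x2) instead of
# the full model shapes. The dimensions below are hypothetical.
M, K, N = 2, 2, 2

X = sp.Matrix(M, K, lambda i, j: sp.Symbol(f"x_{i}{j}"))
W = sp.Matrix(K, N, lambda i, j: sp.Symbol(f"w_{i}{j}"))

# Logical specification: a plain matmul.
logical = X * W

# Parallel plan: column-parallel sharding of W across two "devices",
# followed by an all-gather (modeled as a horizontal concatenation).
W0, W1 = W[:, :N // 2], W[:, N // 2:]
shard0, shard1 = X * W0, X * W1
parallel = shard0.row_join(shard1)

# Equivalence check: the symbolic difference must be identically zero
# for every possible input, not just for sampled numeric values.
assert sp.simplify(logical - parallel) == sp.zeros(M, N)
print("Parallel plan is equivalent to the logical specification.")

The real system operates on entire training dataflow graphs and must justify that equivalence on reduced shapes carries over to the original shapes; this toy check only conveys the flavor of symbolic equivalence checking on a single sharded operator.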
Similar Papers
Verification Limits Code LLM Training
Software Engineering
Makes AI write better computer code by fixing tests.
Tractable Asymmetric Verification for Large Language Models via Deterministic Replicability
Artificial Intelligence
Checks if AI is telling the truth.
xVerify: Efficient Answer Verifier for Reasoning Model Evaluations
Computation and Language
Checks if AI's complex answers are correct.