FOAM: Blocked State Folding for Memory-Efficient LLM Training

Published: December 8, 2025 | arXiv ID: 2512.07112v1

By: Ziqing Wen, Jiahuan Wang, Ping Luo, and more

Potential Business Impact:

Cuts the computer memory needed to train AI models roughly in half.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have demonstrated remarkable performance due to their large parameter counts and extensive training data. However, their scale leads to significant memory bottlenecks during training, especially when using memory-intensive optimizers like Adam. Existing memory-efficient approaches often rely on techniques such as singular value decomposition (SVD), projections, or weight freezing, which can introduce substantial computational overhead, require additional memory for projections, or degrade model performance. In this paper, we propose Folded Optimizer with Approximate Moment (FOAM), a method that compresses optimizer states by computing block-wise gradient means and incorporates a residual correction to recover lost information. Theoretically, FOAM achieves convergence rates equivalent to vanilla Adam under standard non-convex optimization settings. Empirically, FOAM reduces total training memory by approximately 50%, eliminates up to 90% of optimizer state memory overhead, and accelerates convergence. Furthermore, FOAM is compatible with other memory-efficient optimizers, delivering performance and throughput that match or surpass both full-rank and existing memory-efficient baselines.
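To make the core idea concrete, below is a minimal NumPy sketch of an Adam-like step in which the second-moment state is "folded" to one statistic per block of parameters instead of one per element. This is an illustration of block-wise state compression in general, not the paper's exact algorithm: the helper names (`blockwise_fold`, `foam_like_step`), the choice of folding only the second moment, and the omission of FOAM's residual correction are all assumptions made for brevity.

```python
import numpy as np

def blockwise_fold(x, block_size):
    """Average a flat array over contiguous blocks (hypothetical helper).
    Returns one value per block; zero-pads the tail so the reshape works."""
    pad = (-x.size) % block_size
    xp = np.concatenate([x, np.zeros(pad)])
    return xp.reshape(-1, block_size).mean(axis=1)

def foam_like_step(param, grad, m, v_blocks, block_size, t,
                   lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-like update with a folded second moment.

    `v_blocks` holds one running statistic per block of `block_size`
    elements, so its memory is param.size / block_size instead of
    param.size. The residual correction described in the paper is
    omitted here; this sketch only shows the folding itself.
    """
    # First moment kept per element, as in vanilla Adam.
    m = beta1 * m + (1 - beta1) * grad
    # Second moment folded: one statistic per block of squared gradients.
    g2_blocks = blockwise_fold(grad * grad, block_size)
    v_blocks = beta2 * v_blocks + (1 - beta2) * g2_blocks
    # Broadcast the per-block statistic back to element shape for the update.
    v_full = np.repeat(v_blocks, block_size)[: param.size]
    # Standard Adam bias correction.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v_full / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v_blocks
```

With a block size of b, the folded second moment stores param.size / b values instead of param.size, which is the source of the optimizer-state savings; `v_blocks` would be initialized as `np.zeros(ceil(param.size / block_size))` before the first step.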

Page Count
25 pages

Category
Computer Science:
Machine Learning (CS)