Globally optimized SVD compression of LLMs via Fermi-function-based rank selection and gauge fixing

Published: November 26, 2025 | arXiv ID: 2512.03062v1

By: Roman Rausch, David Jansen, Sukhbinder Singh, and more

Potential Business Impact:

Shrinks large language models so they run faster and need less memory.

Business Areas:
Semantic Web, Internet Services

Large Language Models (LLMs) are highly demanding in terms of computational resources. Low-rank decomposition of LLM weights, e.g. via Singular Value Decomposition (SVD), is a promising approach to LLM compression, but it presents several practical hurdles, such as selecting appropriate layer-wise ranks and removing the parameter redundancy of the low-rank factors. In this work, we present two physics-inspired improvements to SVD-based LLM compression: (1) FermiGrad, a gradient-descent algorithm that determines globally optimal layer-wise ranks by relaxing the discrete singular-value truncation into a continuous optimization using the Fermi function; (2) PivGa, an additional lossless compression of the low-rank factors that exploits the intrinsic gauge freedom in their parametrization.
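To make the Fermi-function relaxation concrete, here is a minimal Python sketch of the idea, not the authors' implementation: the function names, the placement of the chemical potential mu and temperature T, and the use of the weight sum as an effective-rank proxy are all assumptions. A hard rank-r truncation keeps a singular value or discards it outright; the relaxation instead scales each singular value sigma_i by a Fermi weight 1/(1 + exp((mu - sigma_i)/T)), which is smooth in mu, so a rank-like quantity becomes amenable to gradient descent.

```python
import numpy as np

def fermi_weights(sigma, mu, T):
    """Soft keep/drop weights for singular values.
    sigma >> mu -> weight ~ 1 (kept); sigma << mu -> weight ~ 0 (dropped)."""
    return 1.0 / (1.0 + np.exp((mu - sigma) / T))

def soft_truncated_svd(W, mu, T):
    """Differentiable surrogate for rank-truncated SVD: each singular
    value is scaled by its Fermi weight instead of hard-thresholded."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    w = fermi_weights(S, mu, T)
    W_soft = (U * (w * S)) @ Vt   # U @ diag(w * S) @ Vt
    eff_rank = w.sum()            # smooth "rank" proxy, differentiable in mu
    return W_soft, eff_rank

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_soft, r = soft_truncated_svd(W, mu=5.0, T=0.5)
print(f"effective rank ~ {r:.1f}, recon error = {np.linalg.norm(W - W_soft):.3f}")
```

As T approaches zero the weights approach a step function and the hard truncation is recovered; the smooth version lets mu (and hence the layer-wise rank budget) be trained jointly across all layers.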
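The gauge freedom that PivGa exploits can likewise be sketched in a few lines. For a rank-k factorization W = A B, any invertible k x k matrix G leaves the product unchanged: (A G)(G^{-1} B) = A B. The sketch below is purely an illustration of that redundancy, not the paper's actual scheme: it picks G so that a well-conditioned k x k row block of A becomes the identity, which then need not be stored, a lossless saving of k^2 parameters. Pivoted QR is used here only as one standard way to select numerically safe rows.

```python
import numpy as np
from scipy.linalg import qr

def gauge_fix(A, B):
    """Gauge-fix a rank-k factorization W = A @ B (illustrative only).

    For any invertible k x k matrix G, (A @ G) @ (inv(G) @ B) = A @ B,
    so the parametrization is redundant. Choosing G = inv(A[rows]) for
    k well-conditioned rows of A (found via pivoted QR on A.T) turns
    that row block of A into the identity, which can be dropped from
    storage without changing the reconstructed weight matrix.
    """
    k = A.shape[1]
    _, _, piv = qr(A.T, pivoting=True)  # column pivots of A.T = row pivots of A
    rows = np.sort(piv[:k])
    G = np.linalg.inv(A[rows])
    A_fixed = A @ G         # A_fixed[rows] == identity (up to round-off)
    B_fixed = A[rows] @ B   # equals inv(G) @ B
    return A_fixed, B_fixed, rows

rng = np.random.default_rng(1)
A = rng.standard_normal((128, 8))
B = rng.standard_normal((8, 64))
A2, B2, rows = gauge_fix(A, B)
print(np.allclose(A @ B, A2 @ B2))        # True: the product is unchanged
print(np.allclose(A2[rows], np.eye(8)))   # True: identity block, free to drop
```

Because the transformation is exact, the compression is lossless: the original product A B is reproduced bit-for-bit up to floating-point round-off.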

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)