The Universal Weight Subspace Hypothesis

Published: December 4, 2025 | arXiv ID: 2512.05117v1

By: Prakhar Kaushik, Shravan Chaudhari, Ankit Vaidya, and more

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Identifies shared low-dimensional structure in trained neural network weights, pointing to cheaper model training, merging, and reuse.

Business Areas:
Semantic Web, Internet Services

We show that deep neural networks trained across diverse tasks exhibit remarkably similar low-dimensional parametric subspaces. We provide the first large-scale empirical evidence that neural networks systematically converge to shared spectral subspaces regardless of initialization, task, or domain. Through mode-wise spectral analysis of over 1100 models - including 500 Mistral-7B LoRAs, 500 Vision Transformers, and 50 LLaMA-8B models - we identify universal subspaces that capture the majority of variance in just a few principal directions. By applying spectral decomposition to the weight matrices of various architectures, we identify sparse, joint subspaces that models sharing an architecture consistently exploit, regardless of task or dataset. Our findings offer new insights into the intrinsic organization of information within deep networks and raise important questions about the possibility of discovering these universal subspaces without extensive data and computational resources. Furthermore, this inherent structure has significant implications for model reusability, multi-task learning, model merging, and the development of training- and inference-efficient algorithms, potentially reducing the carbon footprint of large-scale neural models.
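To make the core idea concrete, below is a minimal sketch of the kind of analysis the abstract describes: take the top spectral directions (via SVD) of each model's weight matrices, then measure how much those per-model subspaces overlap and how much variance a single shared basis captures. This is not the authors' pipeline; the weight matrices here are synthetic stand-ins (a shared basis plus model-specific noise), and all dimensions and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_subspace(W, k):
    """Left singular subspace of a weight matrix: its top-k spectral directions."""
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, :k]

def subspace_overlap(U1, U2):
    """Mean squared cosine of principal angles between two k-dim subspaces.
    1.0 means identical subspaces; two random k-dim subspaces in R^d
    give roughly k/d in expectation."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return float(np.mean(s**2))

d, r, k, n_models = 256, 16, 4, 20  # illustrative sizes, not the paper's

# Synthetic stand-in for the paper's setting: each "model" is a low-rank
# (LoRA-like) weight update mixing a shared basis with model-specific noise.
shared_basis = np.linalg.qr(rng.normal(size=(d, k)))[0]
models = []
for _ in range(n_models):
    coeffs = rng.normal(size=(k, r))
    noise = 0.3 * rng.normal(size=(d, r))
    models.append(shared_basis @ coeffs + noise)

# Pairwise overlap between the models' top-k subspaces.
subspaces = [top_k_subspace(W, k) for W in models]
overlaps = [subspace_overlap(subspaces[i], subspaces[j])
            for i in range(n_models) for j in range(i + 1, n_models)]
print(f"mean pairwise overlap: {np.mean(overlaps):.3f} "
      f"(random baseline ~ {k / d:.3f})")

# Fraction of each model's weight variance captured by one shared k-dim
# basis estimated from the stacked weights of all models.
U_shared = top_k_subspace(np.hstack(models), k)
for W in models[:3]:
    captured = np.linalg.norm(U_shared.T @ W)**2 / np.linalg.norm(W)**2
    print(f"variance captured by shared {k}-dim basis: {captured:.3f}")
```

Under these assumptions, pairwise overlaps land far above the random baseline and the shared basis captures most of each model's variance; the paper's claim is that real checkpoints of a shared architecture behave the same way.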

Country of Origin
🇺🇸 United States

Page Count
37 pages

Category
Computer Science:
Machine Learning (CS)