Score: 1

RMT-KD: Random Matrix Theoretic Causal Knowledge Distillation

Published: September 19, 2025 | arXiv ID: 2509.15724v1

By: Davide Ettori, Nastaran Darabi, Sureshkumar Senthilkumar, and more

Potential Business Impact:

Shrinks large AI models so they run faster and use less power on edge devices.

Business Areas:
A/B Testing, Data and Analytics

Large deep learning models such as BERT and ResNet achieve state-of-the-art performance but are costly to deploy at the edge due to their size and compute demands. We present RMT-KD, a compression method that leverages Random Matrix Theory (RMT) for knowledge distillation to iteratively reduce network size. Instead of pruning or heuristic rank selection, RMT-KD preserves only informative directions identified via the spectral properties of hidden representations. RMT-based causal reduction is applied layer by layer with self-distillation to maintain stability and accuracy. On GLUE, AG News, and CIFAR-10, RMT-KD achieves up to 80% parameter reduction with only 2% accuracy loss, delivering 2.8x faster inference and nearly halved power consumption. These results establish RMT-KD as a mathematically grounded approach to network distillation.
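The summary includes no code, but the core idea of keeping only the spectral directions that rise above the random-matrix noise bulk can be illustrated with a short sketch. The Marchenko–Pastur edge, the median-eigenvalue noise estimate, and the final projection step below are illustrative assumptions, not the authors' exact RMT-KD procedure.

```python
# Minimal sketch: pick "informative" directions of a layer's hidden
# representations by comparing eigenvalues of their sample covariance
# against the Marchenko-Pastur upper edge (a standard RMT noise bound).
# The noise-variance estimator and projection step are assumptions.
import numpy as np

def informative_projection(H: np.ndarray) -> np.ndarray:
    """H: (n_samples, d) hidden activations for one layer.
    Returns a (d, k) orthonormal basis of directions whose variance
    exceeds the Marchenko-Pastur bulk edge."""
    n, d = H.shape
    Hc = H - H.mean(axis=0, keepdims=True)      # center the activations
    cov = Hc.T @ Hc / n                         # sample covariance (d x d)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order

    # Crude noise-variance estimate from the bulk of the spectrum.
    sigma2 = np.median(eigvals)
    # Marchenko-Pastur upper edge for aspect ratio d/n.
    lam_plus = sigma2 * (1.0 + np.sqrt(d / n)) ** 2

    keep = eigvals > lam_plus                   # directions above the noise bulk
    return eigvecs[:, keep]                     # (d, k) projection basis

# Usage sketch: project a layer's features onto the retained subspace, then
# (self-)distill a smaller layer operating in this reduced space.
rng = np.random.default_rng(0)
H = rng.standard_normal((2048, 256)) @ rng.standard_normal((256, 256))
P = informative_projection(H)
H_reduced = H @ P                               # (n_samples, k) compressed features
print(H.shape, "->", H_reduced.shape)
```

In the paper's framing, this spectral selection replaces pruning or heuristic rank choice: only directions that RMT flags as signal (rather than noise) survive, and the layer-by-layer reduction is stabilized by self-distillation.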

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)