Compressing Large Language Models with PCA Without Performance Loss
By: Magnus Bengtsson
Potential Business Impact:
Makes computer models much smaller while working just as well.
We demonstrate that Principal Component Analysis (PCA), when applied in a structured manner, either to polar-transformed images or segment-wise to token sequences, enables extreme compression of neural models without sacrificing performance. Across three case studies, we show that a one-layer classifier trained on PCA-compressed polar MNIST achieves over 98 percent accuracy using only 840 parameters. A two-layer transformer trained on 70-dimensional PCA-reduced MiniLM embeddings reaches 76.62 percent accuracy on the 20 Newsgroups dataset with just 81,000 parameters. A decoder-only transformer generates coherent token sequences from 70-dimensional PCA embeddings while preserving over 97 percent cosine similarity with full MiniLM representations, using less than 17 percent of the parameter count of GPT-2. These results highlight PCA-based input compression as a general and effective strategy for aligning model capacity with information content, enabling lightweight architectures across multiple modalities.
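The text-classification pipeline described in the abstract can be illustrated with a minimal sketch: embed 20 Newsgroups documents with MiniLM, project the embeddings onto 70 principal components, check how much of the original representation the compression preserves, and train a lightweight classifier on the compressed inputs. The "all-MiniLM-L6-v2" checkpoint and the logistic-regression head are assumptions made here for illustration only; the paper itself trains a two-layer transformer on the 70-dimensional inputs.

```python
# Sketch: PCA-based compression of MiniLM sentence embeddings for 20 Newsgroups.
# Assumptions (not stated in the abstract): the "all-MiniLM-L6-v2" checkpoint,
# and a logistic-regression head standing in for the paper's two-layer transformer.

import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

# Load the 20 Newsgroups splits (headers/footers/quotes removed to reduce leakage).
train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

# Encode documents with MiniLM (384-dimensional sentence embeddings).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X_train = encoder.encode(train.data, show_progress_bar=True)
X_test = encoder.encode(test.data, show_progress_bar=True)

# Compress to 70 dimensions with PCA fitted on the training embeddings only.
pca = PCA(n_components=70)
Z_train = pca.fit_transform(X_train)
Z_test = pca.transform(X_test)

# How much of the original representation survives? Reconstruct the embeddings
# from the 70 components and measure cosine similarity against the originals.
X_rec = pca.inverse_transform(Z_test)
cos = np.sum(X_test * X_rec, axis=1) / (
    np.linalg.norm(X_test, axis=1) * np.linalg.norm(X_rec, axis=1)
)
print(f"mean cosine similarity after 70-dim PCA: {cos.mean():.4f}")

# Lightweight classifier on the compressed inputs (stand-in for the two-layer
# transformer head used in the paper).
clf = LogisticRegression(max_iter=2000)
clf.fit(Z_train, train.target)
print(f"test accuracy on 70-dim PCA features: {clf.score(Z_test, test.target):.4f}")
```

The design point the sketch makes is the same one the abstract argues: once the input is projected onto the dominant principal components, the downstream model only needs enough capacity to match the retained information, so very small heads can remain competitive.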
Similar Papers
"Principal Components" Enable A New Language of Images
CV and Pattern Recognition
Makes computers understand pictures better and faster.
PCA-RAG: Principal Component Analysis for Efficient Retrieval-Augmented Generation
Machine Learning (CS)
Makes smart computer answers faster and smaller.
Highly robust factored principal component analysis for matrix-valued outlier accommodation and explainable detection via matrix minimum covariance determinant
Methodology
Finds bad data points in complex pictures.