Compressing Large Language Models with PCA Without Performance Loss

Published: August 6, 2025 | arXiv ID: 2508.04307v1

By: Magnus Bengtsson

Potential Business Impact:

Makes computer models much smaller while they work just as well.

We demonstrate that Principal Component Analysis (PCA), when applied in a structured manner, either to polar-transformed images or segment-wise to token sequences, enables extreme compression of neural models without sacrificing performance. Across three case studies, we show that a one-layer classifier trained on PCA-compressed polar MNIST achieves over 98 percent accuracy using only 840 parameters. A two-layer transformer trained on 70-dimensional PCA-reduced MiniLM embeddings reaches 76.62 percent accuracy on the 20 Newsgroups dataset with just 81,000 parameters. A decoder-only transformer generates coherent token sequences from 70-dimensional PCA embeddings while preserving over 97 percent cosine similarity with full MiniLM representations, using less than 17 percent of the parameter count of GPT-2. These results highlight PCA-based input compression as a general and effective strategy for aligning model capacity with information content, enabling lightweight architectures across multiple modalities.
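
To make the embedding-compression step concrete, here is a minimal Python sketch of the kind of pipeline the abstract describes: reduce MiniLM-style sentence embeddings (commonly 384-dimensional) to 70 PCA components and check how much cosine similarity survives reconstruction. The random stand-in vectors, the scikit-learn PCA, and all dimensions other than the paper's 70 components are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): compress MiniLM-style sentence
# embeddings to 70 PCA components, reconstruct them, and measure how much
# cosine similarity survives. Random vectors stand in for real embeddings,
# so the similarity here will be lower than the >97 percent reported for MiniLM.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 384))      # stand-in for 384-d MiniLM embeddings

pca = PCA(n_components=70)            # 70-d compressed input representation
Z = pca.fit_transform(X)              # (5000, 70) -- what a small model would consume
X_hat = pca.inverse_transform(Z)      # map back to 384-d for comparison

# Row-wise cosine similarity between original and reconstructed embeddings.
sims = (X * X_hat).sum(axis=1) / (
    np.linalg.norm(X, axis=1) * np.linalg.norm(X_hat, axis=1)
)
print(f"mean cosine similarity after 70-d PCA round trip: {sims.mean():.3f}")
```

On real, highly structured embeddings most of the variance concentrates in the leading components, which is what makes a 97 percent-plus cosine similarity plausible at 70 dimensions; on the random stand-ins above the figure will be much lower.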

Country of Origin
πŸ‡ΈπŸ‡ͺ Sweden

Page Count
23 pages

Category
Computer Science:
Computational Engineering, Finance, and Science