Score: 2

Sequences of Logits Reveal the Low Rank Structure of Language Models

Published: October 28, 2025 | arXiv ID: 2510.24966v1

By: Noah Golowich, Allen Liu, Abhishek Shetty

BigTech Affiliations: University of California, Berkeley; Massachusetts Institute of Technology

Potential Business Impact:

Shows that language model outputs have hidden low-rank structure, which can be exploited to generate responses from combinations of outputs on unrelated prompts, pointing toward cheaper or more efficient generation.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

A major problem in the study of large language models is to understand their inherent low-dimensional structure. We introduce an approach to study the low-dimensional structure of language models at a model-agnostic level: as sequential probabilistic models. We first empirically demonstrate that a wide range of modern language models exhibit low-rank structure: in particular, matrices built from the model's logits for varying sets of prompts and responses have low approximate rank. We then show that this low-rank structure can be leveraged for generation -- in particular, we can generate a response to a target prompt using a linear combination of the model's outputs on unrelated, or even nonsensical prompts. On the theoretical front, we observe that studying the approximate rank of language models in the sense discussed above yields a simple universal abstraction whose theoretical predictions parallel our experiments. We then analyze the representation power of the abstraction and give provable learning guarantees.
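The abstract describes two concrete operations: measuring the approximate rank of a matrix of logits collected over many prompts, and generating a target prompt's logits as a linear combination of logits from other prompts. The sketch below is not the paper's code; it is a minimal NumPy illustration under stated assumptions: a synthetic low-rank-plus-noise matrix stands in for real model logits, the matrix shapes and the 99% energy cutoff are arbitrary, and the least-squares step is one simple way to realize the "linear combination" idea.

```python
# Sketch: approximate rank of a logit matrix, and reuse of that structure.
# Assumptions (illustrative, not from the paper): logits for each prompt are
# stacked as rows of a matrix L of shape (num_prompts, vocab_size); a synthetic
# low-rank-plus-noise matrix stands in for logits queried from a real model.
import numpy as np

rng = np.random.default_rng(0)
num_prompts, vocab_size, true_rank = 200, 5000, 20

# Stand-in for logits collected from a language model over many prompts.
L = rng.normal(size=(num_prompts, true_rank)) @ rng.normal(size=(true_rank, vocab_size))
L += 0.01 * rng.normal(size=L.shape)  # small noise, so the rank is only approximate

# 1) Approximate rank: number of singular values needed to capture 99% of the energy.
s = np.linalg.svd(L, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
approx_rank = int(np.searchsorted(energy, 0.99) + 1)
print("approximate rank at 99% energy:", approx_rank)

# 2) Linear-combination generation: express a held-out prompt's logits as a
#    linear combination of the logits of the other (possibly unrelated) prompts.
target = L[-1]   # pretend this row is the target prompt's logits
basis = L[:-1]   # logits of the remaining prompts
coeffs, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
reconstruction = basis.T @ coeffs
rel_err = np.linalg.norm(reconstruction - target) / np.linalg.norm(target)
print("relative reconstruction error:", rel_err)
```

With real model logits in place of the synthetic matrix, a small relative reconstruction error would mirror the paper's observation that outputs on one prompt can be recovered from outputs on other, unrelated prompts.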

Country of Origin
🇺🇸 United States

Page Count
44 pages

Category
Computer Science:
Machine Learning (CS)