Sequences of Logits Reveal the Low Rank Structure of Language Models
By: Noah Golowich, Allen Liu, Abhishek Shetty
Potential Business Impact:
Finds hidden patterns in AI models that can be reused to generate text more efficiently.
A major problem in the study of large language models is to understand their inherent low-dimensional structure. We introduce an approach to study the low-dimensional structure of language models at a model-agnostic level: as sequential probabilistic models. We first empirically demonstrate that a wide range of modern language models exhibit low-rank structure: in particular, matrices built from the model's logits for varying sets of prompts and responses have low approximate rank. We then show that this low-rank structure can be leveraged for generation: in particular, we can generate a response to a target prompt using a linear combination of the model's outputs on unrelated, or even nonsensical, prompts. On the theoretical front, we observe that studying the approximate rank of language models in the sense discussed above yields a simple universal abstraction whose theoretical predictions parallel our experiments. We then analyze the representation power of this abstraction and give provable learning guarantees.
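A minimal sketch of the two empirical ideas above, under stated assumptions: the model ("gpt2"), the specific prompts, and the simplification of using a single next-token logit vector per prompt are illustrative choices, not the paper's setup. The sketch builds a matrix whose rows are logit vectors for different prompts, inspects its singular value decay as a proxy for approximate rank, and fits a least-squares linear combination of those rows to approximate the logits for an unseen target prompt.

# Sketch only: "gpt2" and these prompts are illustrative stand-ins, not the
# models or prompt sets studied in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompts = [
    "The capital of France is",
    "Bananas are a kind of",
    "Quantum computers rely on",
    "My favorite color is",
    "colorless green ideas sleep",   # a nonsensical prompt
]
target_prompt = "The largest planet in the solar system is"

def next_token_logits(text):
    # Logit vector over the vocabulary for the next token after `text`.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.logits[0, -1, :]

# Rows = prompts, columns = vocabulary entries.
M = torch.stack([next_token_logits(p) for p in prompts])   # (n_prompts, vocab)
t = next_token_logits(target_prompt)                       # (vocab,)

# Fast singular value decay suggests low approximate rank.
svals = torch.linalg.svdvals(M)
print("singular values:", [round(float(s), 2) for s in svals])

# Approximate the target prompt's logits as a linear combination of the
# other prompts' logit rows, via least squares.
coeffs = torch.linalg.lstsq(M.T, t.unsqueeze(1)).solution.squeeze(1)
t_hat = M.T @ coeffs

print("true next token:     ", tokenizer.decode(int(t.argmax())))
print("predicted from combo:", tokenizer.decode(int(t_hat.argmax())))

If the next token predicted from the linear combination matches the model's own prediction for the target prompt, that is a small-scale analogue of the generation-by-linear-combination experiment described in the abstract.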
Similar Papers
Provably Learning from Modern Language Models via Low Logit Rank
Machine Learning (CS)
Teaches computers to learn language better.
Last Layer Logits to Logic: Empowering LLMs with Logic-Consistent Structured Knowledge Reasoning
Computation and Language
Fixes computer "thinking" to be more logical.
Evaluating the Use of Large Language Models as Synthetic Social Agents in Social Science Research
Artificial Intelligence
Makes AI better at guessing, not knowing for sure.