Score: 1

Rank-1 LoRAs Encode Interpretable Reasoning Signals

Published: November 10, 2025 | arXiv ID: 2511.06739v1

By: Jake Ward, Paul Riechers, Adam Shai

Potential Business Impact:

Improves AI reasoning with tiny, interpretable changes to the model.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reasoning models leverage inference-time compute to significantly enhance the performance of language models on difficult logical tasks, and have become a dominant paradigm in frontier LLMs. Despite their wide adoption, the mechanisms underpinning the enhanced performance of these reasoning models are not well understood. In this work, we show that the majority of new capabilities in reasoning models can be elicited by small, single-rank changes to base model parameters, with many of these changes being interpretable. Specifically, we use a rank-1 LoRA to create a minimal parameter adapter for Qwen-2.5-32B-Instruct that recovers 73-90% of the reasoning-benchmark performance of a full-parameter finetune. We find that the activations of this LoRA are as interpretable as MLP neurons, and fire for reasoning-specific behaviors. Finally, we train a sparse autoencoder on the entire activation state of this LoRA and identify fine-grained and monosemantic features. Our findings show that reasoning performance can arise largely from minimal changes to base model parameters, and we explore what these changes affect. More broadly, our work shows that parameter-efficient training methods can be used as a targeted lens for uncovering fundamental insights about language model behavior and dynamics.
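As a rough illustration of the setup described in the abstract, the sketch below attaches a rank-1 LoRA adapter to Qwen-2.5-32B-Instruct using Hugging Face's peft library. The target modules and hyperparameters here are assumptions for illustration only; the paper's exact adapter placement and training recipe may differ.

```python
# Minimal sketch (assumptions): attach a rank-1 LoRA adapter to
# Qwen-2.5-32B-Instruct with Hugging Face peft. Target modules and
# hyperparameters are illustrative, not the paper's exact recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

# Rank-1 LoRA: each adapted weight matrix W is updated as W + B @ A,
# where A and B are rank-1 factors, so very few parameters are trained.
lora_config = LoraConfig(
    r=1,                     # single-rank update
    lora_alpha=1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed placement
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # shows the tiny trainable fraction
```

With r=1, each adapted layer adds only one pair of rank-1 vectors, which is what makes the resulting adapter small enough to inspect activation by activation.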

Repos / Data Links

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)