From Interpretability to Performance: Optimizing Retrieval Heads for Long-Context Language Models
By: Youmi Ma, Naoaki Okazaki
Potential Business Impact:
Makes AI remember more and answer better.
Advances in mechanistic interpretability have identified special attention heads, known as retrieval heads, that are responsible for retrieving information from the context. However, the role of these retrieval heads in improving model performance remains unexplored. This work investigates whether retrieval heads can be leveraged to enhance the long-context capabilities of large language models (LLMs). Specifically, we propose RetMask, a method that generates training signals by contrasting normal model outputs with those from an ablated variant in which the retrieval heads are masked. This mechanism-based approach achieves substantial improvements: +2.28 points on HELMET at 128K for Llama-3.1, with +70% gains on generation with citation and +32% on passage re-ranking, while preserving performance on general tasks. Experiments across three model families reveal that effectiveness depends on how retrieval heads are organized: models with concentrated retrieval-head patterns respond strongly, while those with distributed patterns show limited gains. This mechanistic relationship validates the function of retrieval heads and demonstrates that mechanistic insights can be translated into performance enhancements.
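The ablation step at the core of this contrast can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's implementation: it zeroes the per-head outputs of a chosen set of head indices (standing in for identified retrieval heads) before the heads would be concatenated and projected in multi-head attention. Shapes and the `mask_heads` name are assumptions for the example.

```python
import numpy as np

def mask_heads(head_outputs: np.ndarray, masked: list[int]) -> np.ndarray:
    """Ablate selected attention heads by zeroing their outputs.

    head_outputs: per-head attention outputs, shape (num_heads, seq_len, head_dim)
    masked: indices of heads to ablate (e.g. hypothetical retrieval heads)
    """
    out = head_outputs.copy()
    out[masked] = 0.0  # zero the masked heads; remaining heads are untouched
    return out

# Toy example: 4 heads, 3 tokens, head dimension 2.
rng = np.random.default_rng(0)
h = rng.standard_normal((4, 3, 2))
ablated = mask_heads(h, masked=[1, 3])
```

Contrasting a model's outputs with and without this masking is one way to surface which predictions rely on the ablated heads, which is the kind of signal the abstract describes RetMask deriving training data from.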
Similar Papers
Query-Focused Retrieval Heads Improve Long-Context Reasoning and Re-ranking
Computation and Language
Helps computers find important info in long texts.
Interpreting and Mitigating Unwanted Uncertainty in LLMs
Computation and Language
Fixes AI answers so they stay correct.
Context Length Alone Hurts LLM Performance Despite Perfect Retrieval
Computation and Language
Makes computers understand long stories better.