Long-Short Alignment for Effective Long-Context Modeling in LLMs
By: Tianqi Du, Haotian Huang, Yifei Wang, and more
Potential Business Impact:
Makes AI remember more of what you say.
Large language models (LLMs) have exhibited impressive performance and surprising emergent properties. However, their effectiveness remains limited by the fixed context window of the transformer architecture, posing challenges for long-context modeling. Among these challenges, length generalization -- the ability to generalize to sequences longer than those seen during training -- is a classical and fundamental problem. In this work, we propose a fresh perspective on length generalization, shifting the focus from the conventional emphasis on input features such as positional encodings or data structures to the output distribution of the model. Specifically, through case studies on synthetic tasks, we highlight the critical role of long-short alignment -- the consistency of output distributions across sequences of varying lengths. Extending this insight to natural language tasks, we propose a metric called Long-Short Misalignment to quantify this phenomenon, uncovering a strong correlation between the metric and length generalization performance. Building on these findings, we develop a regularization term that promotes long-short alignment during training. Extensive experiments validate the effectiveness of our approach, offering new insights for achieving more effective long-context modeling in LLMs. Code is available at https://github.com/PKU-ML/LongShortAlignment.
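To make the idea concrete, below is a minimal sketch of how a long-short misalignment score could be computed and used as a training regularizer. The paper's exact metric and loss are defined in the linked repository; this sketch assumes a Hugging Face-style causal LM whose forward pass returns `.logits`, and it uses a symmetric KL divergence between next-token distributions as one illustrative choice of divergence. The function name and the `lambda_align` weight are hypothetical.

```python
# Illustrative sketch only: the exact metric from the paper may differ.
import torch.nn.functional as F

def long_short_misalignment(model, input_ids, short_len):
    """Divergence between the model's next-token distributions when it is
    conditioned on the full (long) context vs. a truncated (short) suffix.

    input_ids: (batch, seq_len) token ids, with short_len < seq_len.
    Returns a scalar that is small when the two distributions agree.
    """
    long_logits = model(input_ids).logits[:, -1, :]                   # full context
    short_logits = model(input_ids[:, -short_len:]).logits[:, -1, :]  # truncated context

    log_p_long = F.log_softmax(long_logits, dim=-1)
    log_p_short = F.log_softmax(short_logits, dim=-1)

    # Symmetric KL between the two next-token distributions (illustrative choice).
    kl_a = F.kl_div(log_p_short, log_p_long, log_target=True, reduction="batchmean")
    kl_b = F.kl_div(log_p_long, log_p_short, log_target=True, reduction="batchmean")
    return 0.5 * (kl_a + kl_b)

# During training, such a term could be added to the usual language-modeling loss:
#   loss = lm_loss + lambda_align * long_short_misalignment(model, batch_ids, short_len)
```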
Similar Papers
Shifting Long-Context LLMs Research from Input to Output
Computation and Language
Helps computers write long, smart stories.
A Survey on Transformer Context Extension: Approaches and Evaluation
Computation and Language
Helps computers understand long stories better.
From 128K to 4M: Efficient Training of Ultra-Long Context Large Language Models
Computation and Language
Lets computers understand much longer stories.