
Mitigating Posterior Salience Attenuation in Long-Context LLMs with Positional Contrastive Decoding

Published: June 10, 2025 | arXiv ID: 2506.08371v2

By: Zikai Xiao, Ziyang Wang, Wen Ma, and more

Potential Business Impact:

Helps AI models retain and use more information from long documents.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

While Large Language Models (LLMs) support long contexts, they struggle with performance degradation within the context window. Current solutions incur prohibitive training costs, leaving statistical behaviors and cost-effective approaches underexplored. From the decoding perspective, we identify the Posterior Salience Attenuation (PSA) phenomenon, where the salience ratio correlates with long-text performance degradation. Notably, despite the attenuation, gold tokens still occupy high-ranking positions in the decoding space. Motivated by this observation, we propose the training-free Positional Contrastive Decoding (PCD), which contrasts the logits derived from long-aware attention with those from a designed local-aware attention, enabling the model to focus on the gains introduced by large-scale short-to-long training. Through an analysis of long-term decay simulations, we demonstrate that PCD effectively alleviates attention score degradation. Experimental results show that PCD achieves state-of-the-art performance on long-context benchmarks.
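A minimal sketch of the contrastive-logits idea described in the abstract, assuming a generic contrastive-decoding formulation: the function name, the `alpha` weight, and the exact combination rule are illustrative assumptions, not the paper's implementation, and the construction of the local-aware attention pass is left abstract.

```python
import torch

def positional_contrastive_decoding(logits_long: torch.Tensor,
                                     logits_local: torch.Tensor,
                                     alpha: float = 1.0) -> torch.Tensor:
    """Sketch of contrasting two logit views at one decoding step.

    logits_long:  next-token logits from the standard long-aware attention pass.
    logits_local: next-token logits from a restricted, local-aware attention pass
                  (its exact design follows the paper and is not reproduced here).
    alpha:        hypothetical contrast strength; larger values weight the
                  long-aware gains more heavily.
    """
    # Amplify what the long-aware view adds on top of the local-aware view,
    # so tokens whose salience was attenuated by long context are boosted.
    contrast = (1.0 + alpha) * logits_long - alpha * logits_local
    return torch.argmax(contrast, dim=-1)

# Example with toy logits over a 5-token vocabulary:
long_view = torch.tensor([2.0, 1.5, 0.3, 0.1, -1.0])
local_view = torch.tensor([2.1, 0.4, 0.3, 0.1, -1.0])
print(positional_contrastive_decoding(long_view, local_view))  # tensor(1)
```

In this toy example the contrast promotes the token that the long-aware pass ranks higher than the local-aware pass, which is the intuition behind focusing on gains from short-to-long training; the paper's actual scoring rule may differ.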

Country of Origin
🇨🇳 China

Page Count
10 pages

Category
Computer Science:
Computation and Language