Layer-wise Positional Bias in Short-Context Language Modeling

Published: January 7, 2026 | arXiv ID: 2601.04098v1

By: Maryam Rahimi, Mahdi Nouri, Yadollah Yaghoobzadeh

Potential Business Impact:

Shows which parts of a prompt language models actually rely on, layer by layer, which can guide how information is ordered in prompts and search results.

Business Areas:
Semantic Search Internet Services

Language models often show a preference for using information from specific positions in the input regardless of semantic relevance. While positional bias has been studied in various contexts, from attention sinks to task performance degradation in long-context settings, prior work has not established how these biases evolve across individual layers and input positions, or how they vary independently of task complexity. We introduce an attribution-based framework to analyze positional effects in short-context language modeling. Using layer conductance with a sliding-window approach, we quantify how each layer distributes importance across input positions, yielding layer-wise positional importance profiles. We find that these profiles are architecture-specific, stable across inputs, and invariant to lexical scrambling. Characterizing these profiles, we find a prominent recency bias that increases with depth and a subtle primacy bias that diminishes with depth. Beyond positional structure, we also show that early layers preferentially weight content words over function words across all positions, while later layers lose this word-type differentiation.
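The sliding-window aggregation the abstract describes can be illustrated with a small sketch. This is not the paper's code: it assumes per-layer, per-token attribution magnitudes (e.g. from layer conductance) have already been computed, and the helper name and window size are hypothetical.

```python
import numpy as np

def positional_profile(attributions, window=3):
    """Hypothetical helper: aggregate per-token attribution scores
    into a layer-wise positional importance profile by averaging
    over a sliding window of positions.

    attributions: array of shape (num_layers, seq_len) holding
    per-layer, per-position attribution magnitudes.
    Returns an array of shape (num_layers, seq_len - window + 1),
    one smoothed importance value per window start.
    """
    num_layers, seq_len = attributions.shape
    n_windows = seq_len - window + 1
    profile = np.empty((num_layers, n_windows))
    for start in range(n_windows):
        # mean attribution inside the window, per layer
        profile[:, start] = attributions[:, start:start + window].mean(axis=1)
    return profile

# Toy example: 4 layers, 10 positions, importance rising toward the
# end of the input (a recency-biased pattern like the one reported).
rng = np.random.default_rng(0)
attr = np.abs(rng.normal(size=(4, 10))) + np.linspace(0.0, 1.0, 10)
prof = positional_profile(attr, window=3)
print(prof.shape)  # (4, 8)
```

Comparing `prof` across the layer axis is what yields the layer-wise profiles: a recency bias shows up as later window positions receiving larger mean attribution, more strongly in deeper layers.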

Country of Origin
🇮🇷 Iran

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Computation and Language