Token Homogenization under Positional Bias

Published: August 23, 2025 | arXiv ID: 2508.17126v1

By: Viacheslav Yusupov, Danil Maksimov, Ameliia Alaeva, and more

Potential Business Impact:

Shows why language models blur the meanings of words, especially at the start and end of an input, pointing to fixes for positional bias that could make AI understand text more reliably.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper investigates token homogenization, the convergence of token representations toward uniformity across transformer layers, and its relationship to positional bias in large language models. We empirically examine whether homogenization occurs and how positional bias amplifies this effect. Through layer-wise similarity analysis and controlled experiments, we demonstrate that tokens systematically lose distinctiveness during processing, particularly when biased toward extremal positions. Our findings confirm both the existence of homogenization and its dependence on positional attention mechanisms.
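The core measurement, layer-wise similarity analysis, can be illustrated with a short sketch: for each layer, compare every pair of token hidden states and watch whether their average similarity grows with depth. The sketch below is a minimal illustration, not the paper's exact protocol; the model choice ("gpt2"), the prompt, and the use of mean pairwise cosine similarity as the homogenization proxy are all assumptions for the example.

```
# Minimal sketch of layer-wise similarity analysis: for each transformer
# layer, compute the mean pairwise cosine similarity between token hidden
# states. Rising similarity across depth would indicate homogenization.
# Model ("gpt2") and prompt are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # assumption: any model exposing output_hidden_states
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

text = "Transformers process tokens in parallel across many layers."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states: tuple of (num_layers + 1) tensors, each (1, seq_len, dim)
for layer_idx, hidden in enumerate(outputs.hidden_states):
    h = hidden[0]                                  # (seq_len, dim)
    h = torch.nn.functional.normalize(h, dim=-1)   # unit-length token vectors
    sim = h @ h.T                                  # pairwise cosine similarities
    n = sim.shape[0]
    # Average the off-diagonal entries only (exclude each token's self-similarity).
    off_diag = (sim.sum() - n) / (n * (n - 1))
    print(f"layer {layer_idx:2d}: mean pairwise cosine = {off_diag:.4f}")
```

If homogenization is present, the printed mean cosine similarity should trend upward from the embedding layer to the final layer; restricting the pairs to tokens at extremal positions would be one way to probe the positional-bias effect the abstract describes.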

Page Count
9 pages

Category
Computer Science:
Computation and Language