Fusion Matters: Length-Aware Analysis of Positional-Encoding Fusion in Transformers

Published: January 9, 2026 | arXiv ID: 2601.05807v1

By: Mohamed Amine Hallam, Kuo-Kun Tseng

Potential Business Impact:

Improves how AI models understand and classify long documents.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transformers require positional encodings to represent sequence order, yet most prior work focuses on designing new positional encodings rather than examining how positional information is fused with token embeddings. In this paper, we study whether the fusion mechanism itself affects performance, particularly in long-sequence settings. We conduct a controlled empirical study comparing three canonical fusion strategies (element-wise addition, concatenation with projection, and scalar gated fusion) under identical Transformer architectures, data splits, and random seeds. Experiments on three text classification datasets spanning short (AG News), medium (IMDB), and long (ArXiv) sequences show that fusion choice has negligible impact on short texts but produces consistent gains on long documents. To verify that these gains are structural rather than stochastic, we perform paired-seed analysis and cross-dataset comparison across sequence-length regimes. Additional experiments on the ArXiv dataset indicate that the benefit of learnable fusion generalizes across multiple positional encoding families. Finally, we explore a lightweight convolutional gating mechanism that introduces local inductive bias at the fusion level, evaluated on long documents only. Our results indicate that positional-encoding fusion is a non-trivial design choice for long-sequence Transformers and should be treated as an explicit modeling decision rather than a fixed default.
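The abstract names four fusion mechanisms but does not give their formulations. The sketch below is a minimal PyTorch illustration of how such fusions can be implemented, not the authors' code: the class names, the sigmoid parameterization of the scalar gate, and the use of a depthwise 1D convolution for the convolutional gate are assumptions made here for clarity.

```python
# Minimal sketch (not the authors' implementation) of positional-encoding
# fusion strategies. Assumes token embeddings `tok` and positional
# embeddings `pos` share the shape (batch, seq_len, d_model).
import torch
import torch.nn as nn


class AdditiveFusion(nn.Module):
    """Element-wise addition: the default in most Transformer implementations."""
    def forward(self, tok: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        return tok + pos


class ConcatProjectFusion(nn.Module):
    """Concatenate token and positional embeddings, then project back to d_model."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, tok: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        return self.proj(torch.cat([tok, pos], dim=-1))


class ScalarGatedFusion(nn.Module):
    """Learn a single scalar gate balancing token vs. positional information."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5 at init

    def forward(self, tok: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)
        return g * tok + (1.0 - g) * pos


class ConvGatedFusion(nn.Module):
    """Hypothetical convolutional gating: a depthwise 1D conv over the sequence
    produces a position-wise gate, adding local inductive bias at the fusion
    step. The paper's exact formulation may differ."""
    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2, groups=d_model)

    def forward(self, tok: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        # Conv1d expects (batch, channels, seq_len), so transpose in and out.
        g = torch.sigmoid(self.conv(tok.transpose(1, 2))).transpose(1, 2)
        return g * tok + (1.0 - g) * pos


if __name__ == "__main__":
    batch, seq_len, d_model = 2, 16, 64
    tok = torch.randn(batch, seq_len, d_model)
    pos = torch.randn(batch, seq_len, d_model)
    for fusion in (AdditiveFusion(), ConcatProjectFusion(d_model),
                   ScalarGatedFusion(), ConvGatedFusion(d_model)):
        print(type(fusion).__name__, fusion(tok, pos).shape)
```

All four modules map the same (batch, seq_len, d_model) inputs to an output of identical shape, so they can be swapped in front of an otherwise unchanged Transformer encoder, which is the kind of controlled comparison the paper describes.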

Country of Origin
🇨🇳 China

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)