ScaleFormer: Span Representation Cumulation for Long-Context Transformer
By: Jiangshu Du, Wenpeng Yin, Philip Yu
Potential Business Impact:
Lets computers understand long stories better.
The quadratic complexity of standard self-attention severely limits the application of Transformer-based models to long-context tasks. While efficient Transformer variants exist, they often require architectural changes and costly pre-training from scratch. To circumvent this, we propose ScaleFormer (Span Representation Cumulation for Long-Context Transformer), a simple and effective plug-and-play framework that adapts off-the-shelf pre-trained encoder-decoder models to process long sequences without architectural modifications. Our approach segments long inputs into overlapping chunks and generates a compressed, context-aware representation for the decoder. The core of our method is a novel, parameter-free fusion mechanism that endows each chunk's representation with structural awareness of its position within the document: it enriches each chunk's boundary representations with cumulative context vectors from all preceding and succeeding chunks. This strategy provides the model with a strong signal of the document's narrative flow, achieves linear complexity, and enables pre-trained models to reason effectively over long-form text. Experiments on long-document summarization show that our method is highly competitive with, and often outperforms, state-of-the-art approaches without requiring architectural modifications or external retrieval mechanisms.
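The fusion mechanism is concrete enough to sketch. Below is a minimal, illustrative PyTorch rendering of the idea as the abstract describes it: split the input into overlapping chunks, pool each encoded chunk into a summary vector, and add cumulative preceding/succeeding context to each chunk's boundary positions before concatenating the result for the decoder. The chunk length, stride, mean pooling, and additive fusion are assumptions made for illustration, not the paper's exact settings.

```python
import torch

def chunk_ids(input_ids: list[int], chunk_len: int = 512, stride: int = 384) -> list[list[int]]:
    """Split a long token sequence into overlapping chunks.

    chunk_len and stride are illustrative values; stride < chunk_len
    yields the overlap the paper calls for.
    """
    return [input_ids[s:s + chunk_len] for s in range(0, len(input_ids), stride)]

def scaleformer_fuse(chunk_reprs: list[torch.Tensor]) -> torch.Tensor:
    """Parameter-free cumulative fusion over encoded chunks (sketch).

    chunk_reprs: one (chunk_len, d_model) tensor per chunk, produced by a
    pre-trained encoder run independently on each chunk.
    Returns the concatenated, context-enriched sequence for the decoder.
    """
    # One pooled summary vector per chunk (mean pooling is an assumption).
    pooled = torch.stack([c.mean(dim=0) for c in chunk_reprs])  # (n_chunks, d_model)
    n = pooled.size(0)

    fused = []
    for i, chunk in enumerate(chunk_reprs):
        chunk = chunk.clone()
        if i > 0:
            # Enrich the left boundary token with cumulative preceding context.
            chunk[0] = chunk[0] + pooled[:i].mean(dim=0)
        if i < n - 1:
            # Enrich the right boundary token with cumulative succeeding context.
            chunk[-1] = chunk[-1] + pooled[i + 1:].mean(dim=0)
        fused.append(chunk)

    # Each chunk attends only within itself during encoding, so total cost
    # grows linearly with the number of chunks (hence document length).
    return torch.cat(fused, dim=0)
```

Because the fusion uses only pooling and addition, it introduces no new parameters, which is what lets an off-the-shelf encoder-decoder be adapted without pre-training from scratch; the decoder then cross-attends to the fused sequence as usual.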
Similar Papers
CacheFormer: High Attention-Based Segment Caching
Machine Learning (CS)
Helps computers understand long stories better.
Understanding and Improving Length Generalization in Hierarchical Sparse Attention Models
Computation and Language
Lets computers understand much longer stories.
GContextFormer: A global context-aware hybrid multi-head attention approach with scaled additive aggregation for multimodal trajectory prediction
Artificial Intelligence
Helps cars predict where other cars will go.