Inceptive Transformers: Enhancing Contextual Representations through Multi-Scale Feature Learning Across Domains and Languages
By: Asif Shahriar, Rifat Shahriyar, M Saifur Rahman
Potential Business Impact:
Helps AI models capture fine-grained details by adding local, multi-scale views of the text.
Encoder transformer models compress information from all tokens in a sequence into a single [CLS] token to represent global context. This approach risks diluting fine-grained or hierarchical features, leading to information loss in downstream tasks where local patterns are important. To remedy this, we propose a lightweight architectural enhancement: an inception-style 1-D convolution module that sits on top of the transformer layer and augments token representations with multi-scale local features. This enriched feature space is then processed by a self-attention layer that dynamically weights tokens based on their task relevance. Experiments on five diverse tasks show that our framework consistently improves general-purpose, domain-specific, and multilingual models, outperforming baselines by 1% to 14% while maintaining efficiency. Ablation studies show that multi-scale convolution performs better than any single kernel and that the self-attention layer is critical for performance.
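To make the described architecture concrete, here is a minimal sketch of an inception-style multi-scale 1-D convolution block followed by a token-weighting self-attention layer, assuming a PyTorch setting and a BERT-like encoder output. The module names, kernel sizes, residual addition, and mean pooling are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the inception-style enhancement described in the abstract.
# Kernel sizes, dimensions, and pooling are assumptions for illustration only.
import torch
import torch.nn as nn


class InceptionConvBlock(nn.Module):
    """Inception-style multi-scale 1-D convolutions over token embeddings."""

    def __init__(self, hidden_dim: int, kernel_sizes=(1, 3, 5)):
        super().__init__()
        branch_dim = hidden_dim // len(kernel_sizes)
        # One conv branch per kernel size; odd kernels with k//2 padding keep seq length.
        self.branches = nn.ModuleList(
            nn.Conv1d(hidden_dim, branch_dim, kernel_size=k, padding=k // 2)
            for k in kernel_sizes
        )
        self.proj = nn.Linear(branch_dim * len(kernel_sizes), hidden_dim)

    def forward(self, x):  # x: (batch, seq_len, hidden_dim)
        x = x.transpose(1, 2)  # Conv1d expects (batch, channels, seq_len)
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        feats = feats.transpose(1, 2)  # back to (batch, seq_len, features)
        return self.proj(feats)


class InceptiveHead(nn.Module):
    """Augments encoder outputs with multi-scale local features, then
    re-weights tokens with a self-attention layer before classification."""

    def __init__(self, hidden_dim: int, num_classes: int, num_heads: int = 8):
        super().__init__()
        self.inception = InceptionConvBlock(hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, encoder_out, attention_mask=None):
        # encoder_out: (batch, seq_len, hidden_dim), e.g. a BERT last hidden state.
        local = self.inception(encoder_out)      # multi-scale local features
        enriched = encoder_out + local           # augment token representations
        key_padding_mask = attention_mask == 0 if attention_mask is not None else None
        weighted, _ = self.attn(enriched, enriched, enriched,
                                key_padding_mask=key_padding_mask)
        pooled = weighted.mean(dim=1)            # simple pooling; the paper's choice may differ
        return self.classifier(pooled)
```

In this sketch the convolution branches capture local patterns at several receptive fields, their concatenation restores the original hidden size, and the self-attention layer lets the model weight tokens by task relevance before pooling, which is the role the abstract attributes to it.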
Similar Papers
Conv-like Scale-Fusion Time Series Transformer: A Multi-Scale Representation for Variable-Length Long Time Series
Machine Learning (CS)
Helps computers predict future events better.
Multi-Level Attention and Contrastive Learning for Enhanced Text Classification with an Optimized Transformer
Computation and Language
Helps computers understand text meaning better.
KunlunBaize: LLM with Multi-Scale Convolution and Multi-Token Prediction Under TransformerX Framework
Computation and Language
Makes AI understand and write text much faster.