Understanding and Enhancing Mamba-Transformer Hybrids for Memory Recall and Language Modeling
By: Hyunji Lee, Wenhao Yu, Hongming Zhang, and more
Potential Business Impact:
Helps AI models remember and use information from long texts more accurately.
Hybrid models that combine state space models (SSMs) with attention mechanisms have shown strong performance by leveraging the efficiency of SSMs and the high recall ability of attention. However, the architectural design choices behind these hybrid models remain insufficiently understood. In this work, we analyze hybrid architectures through the lens of memory utilization and overall performance, and propose a complementary method to further enhance their effectiveness. We first examine the distinction between sequential and parallel integration of SSM and attention layers. Our analysis reveals several interesting findings, including that sequential hybrids perform better on shorter contexts, whereas parallel hybrids are more effective for longer contexts. We also introduce a data-centric approach of continually training on datasets augmented with paraphrases, which further enhances recall while preserving other capabilities. It generalizes well across different base models and outperforms architectural modifications aimed at enhancing recall. Our findings provide a deeper understanding of hybrid SSM-attention models and offer practical guidance for designing architectures tailored to various use cases.
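The abstract contrasts sequential and parallel integration of SSM and attention layers. The sketch below is a minimal PyTorch illustration of that distinction only, not the authors' implementation; `SSMBlock`, `AttentionBlock`, `SequentialHybrid`, and `ParallelHybrid` are hypothetical placeholders, and the concatenate-then-project merge in the parallel variant is just one plausible choice.

```python
# Minimal sketch (not the paper's code) contrasting sequential vs. parallel
# integration of an SSM layer and an attention layer inside a hybrid block.
import torch
import torch.nn as nn


class SSMBlock(nn.Module):
    """Placeholder for a state-space (e.g. Mamba-style) mixing layer."""
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)  # stand-in for SSM dynamics

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class AttentionBlock(nn.Module):
    """Placeholder for a multi-head self-attention mixing layer."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x, need_weights=False)
        return out


class SequentialHybrid(nn.Module):
    """SSM and attention applied one after the other (stacked sublayers)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.ssm, self.attn = SSMBlock(d_model), AttentionBlock(d_model)
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.ssm(self.norm1(x))    # SSM sublayer first
        x = x + self.attn(self.norm2(x))   # then attention sublayer
        return x


class ParallelHybrid(nn.Module):
    """SSM and attention applied to the same input; outputs are merged."""
    def __init__(self, d_model: int):
        super().__init__()
        self.ssm, self.attn = SSMBlock(d_model), AttentionBlock(d_model)
        self.norm = nn.LayerNorm(d_model)
        self.merge = nn.Linear(2 * d_model, d_model)  # one possible merge rule

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        merged = self.merge(torch.cat([self.ssm(h), self.attn(h)], dim=-1))
        return x + merged


# Usage: both variants map (batch, sequence, d_model) to the same shape.
x = torch.randn(2, 16, 256)
print(SequentialHybrid(256)(x).shape, ParallelHybrid(256)(x).shape)
```

The data-centric method is described only at a high level (continual training on data augmented with paraphrases). The following is a rough sketch of what such augmentation could look like; `paraphrase` and `augment_with_paraphrases` are hypothetical names, and a real pipeline would call a paraphrasing model rather than the identity placeholder used here.

```python
# Minimal sketch (assumptions, not the paper's pipeline) of building a
# paraphrase-augmented corpus for continual training, so the same facts
# appear in multiple surface forms to encourage recall.
import random
from typing import Callable, Iterable, Iterator


def paraphrase(text: str) -> str:
    """Hypothetical paraphraser; replace with a real paraphrasing model."""
    return text  # identity placeholder


def augment_with_paraphrases(
    documents: Iterable[str],
    paraphraser: Callable[[str], str] = paraphrase,
    augment_prob: float = 0.5,
    seed: int = 0,
) -> Iterator[str]:
    """Yield each document, sometimes followed by a paraphrased restatement."""
    rng = random.Random(seed)
    for doc in documents:
        if rng.random() < augment_prob:
            yield doc + "\n\n" + paraphraser(doc)
        else:
            yield doc


# Usage: stream the augmented corpus into an existing continual-training loop.
corpus = ["The Eiffel Tower is in Paris.", "Mamba is a state space model."]
for sample in augment_with_paraphrases(corpus):
    print(sample)
```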
Similar Papers
Hybrid Architectures for Language Models: Systematic Analysis and Design Insights
Computation and Language
Makes AI understand long texts faster and better.
When recalling in-context, Transformers are not SSMs
Machine Learning (CS)
Makes AI better at remembering and understanding.
Characterizing the Behavior of Training Mamba-based State Space Models on GPUs
Machine Learning (CS)
Makes AI faster at understanding long texts.