Characterizing the Behavior of Training Mamba-based State Space Models on GPUs
By: Trinayan Baruah, Kaustubh Shivdikar, Sara Prescott, and more
Potential Business Impact:
Makes AI faster at understanding long texts.
Mamba-based State Space Models (SSMs) have emerged as a promising alternative to the ubiquitous transformer. Despite the expressive power of transformers, the quadratic complexity of computing attention is a major impediment to scaling performance as sequence length increases. SSMs provide an alternative path that addresses this problem, reducing the computational cost of self-attention with novel model architectures for domains such as video, text generation, and graphs. It is therefore important to characterize the behavior of these emerging workloads on GPUs and to understand their requirements during GPU microarchitectural design. In this work, we evaluate Mamba-based SSMs and characterize their behavior during training on GPUs. We construct a workload suite of representative models that span different model architectures, and we use this suite to analyze the architectural implications of running Mamba-based SSMs on GPUs. Our work sheds new light on potential optimizations that can continue scaling the performance of such models.
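The abstract's central contrast, attention's quadratic cost in sequence length versus the linear-time recurrence of Mamba-style SSMs, can be illustrated in a few lines of code. The sketch below is a minimal NumPy illustration with assumed shapes and scalar SSM parameters; it is not the paper's workload suite, nor the hardware-aware selective-scan kernel used in real Mamba implementations.

```python
# Minimal sketch (illustrative assumptions, not the paper's benchmark code):
# contrasts the O(L^2) attention score matrix with the O(L) recurrence that
# SSM-style layers apply per channel.
import numpy as np

L, d = 1024, 64          # sequence length and model/state dimension (assumed)
rng = np.random.default_rng(0)

# Self-attention: the L x L score matrix grows quadratically with L.
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
scores = Q @ K.T / np.sqrt(d)                 # shape (L, L): O(L^2) work and memory
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
attn_out = weights @ V                        # shape (L, d)

# SSM-style linear recurrence: h_t = A * h_{t-1} + B * x_t, y_t = C * h_t.
# One pass over the sequence: O(L) work with constant state per channel.
x = rng.standard_normal((L, d))
A, B, C = 0.9, 1.0, 1.0                       # scalar parameters for illustration
h = np.zeros(d)
ssm_out = np.empty((L, d))
for t in range(L):
    h = A * h + B * x[t]
    ssm_out[t] = C * h

print(attn_out.shape, ssm_out.shape)          # (1024, 64) (1024, 64)
```

Doubling L quadruples the attention score matrix but only doubles the scan's work, which is the scaling advantage the paper's GPU characterization study builds on.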
Similar Papers
PerfMamba: Performance Analysis and Pruning of Selective State Space Models
Machine Learning (CS)
Makes computer models run faster and use less memory.
X-VMamba: Explainable Vision Mamba
CV and Pattern Recognition
Shows how computer vision "sees" medical images.
Understanding and Enhancing Mamba-Transformer Hybrids for Memory Recall and Language Modeling
Computation and Language
Makes AI understand long stories better.