Mamba-X: An End-to-End Vision Mamba Accelerator for Edge Computing Devices

Published: August 5, 2025 | arXiv ID: 2508.02977v1

By: Dongho Yoon, Gungyu Lee, Jaewon Chang, and more

Potential Business Impact:

Enables fast, memory-efficient vision AI on small edge devices.

Transformers have proven effective in language modeling but are limited by high computational and memory demands that grow quadratically with input sequence length. State space models (SSMs) offer a promising alternative by reducing attention complexity from $O(L^2)$ to $O(L)$ while also lowering overall memory consumption. Vision Mamba adapts the SSM approach for computer vision tasks, achieving lower latency and memory consumption than traditional transformer models. However, deploying Vision Mamba on edge devices is challenging due to its sequential scan operations, which hinder GPU efficiency. We propose Mamba-X, an end-to-end Vision Mamba accelerator that includes a systolic scan array to maximize parallelism and minimize memory traffic, along with a hybrid, hardware-friendly quantization technique to reduce memory usage and improve hardware efficiency without sacrificing accuracy.
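The scan operation at the heart of the SSM is the linear recurrence $h_t = A h_{t-1} + B x_t$: $O(L)$ in sequence length, but sequential in its naive form. The key to parallelizing it, as accelerators like the paper's systolic scan array exploit, is that the recurrence can be rewritten as a scan over an associative operator, so partial results can be combined in any grouping. A minimal sketch in plain Python (scalar state for illustration; the function names and the scalar simplification are ours, not from the paper):

```python
def sequential_scan(a, b, x):
    """Naive SSM recurrence h_t = a*h_{t-1} + b*x_t; O(L) but strictly sequential."""
    h, prev = [], 0.0
    for xt in x:
        prev = a * prev + b * xt
        h.append(prev)
    return h

def combine(p, q):
    """Associative operator on (multiplier, offset) pairs:
    applying p then q to a state h gives q[0]*(p[0]*h + p[1]) + q[1]."""
    return (p[0] * q[0], p[1] * q[0] + q[1])

def associative_scan(a, b, x):
    """Same recurrence expressed as a prefix scan of `combine`.
    Because `combine` is associative, this scan can be evaluated in
    parallel (e.g., by a systolic array) instead of step by step."""
    h, acc = [], (1.0, 0.0)  # identity element: h -> h
    for xt in x:
        acc = combine(acc, (a, b * xt))
        h.append(acc[1])  # offset applied to h_0 = 0 is h_t itself
    return h

a, b = 0.9, 0.5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
print(sequential_scan(a, b, x) == associative_scan(a, b, x))
```

Here both versions walk the sequence left to right, but only the second is written in terms of an associative combine step, which is the property a parallel or systolic implementation needs; this sketch does not reproduce Mamba-X's actual dataflow.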

Country of Origin
🇰🇷 Korea, Republic of

Page Count
14 pages

Category
Computer Science:
Hardware Architecture