M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models
By: Junxiong Wang, Wen-Ding Li, Daniele Paliotta, and more
Potential Business Impact:
Makes computers that solve math problems faster and smarter.
Effective reasoning is crucial to solving complex mathematical problems. Recent large language models (LLMs) have boosted performance by scaling test-time computation through long chain-of-thought reasoning. However, transformer-based models are inherently limited in extending context length due to their quadratic computational complexity and linear memory requirements. In this paper, we introduce M1, a novel hybrid linear RNN reasoning model built on the Mamba architecture, which allows memory-efficient inference. Our approach leverages a distillation process from existing reasoning models and is further enhanced through reinforcement learning (RL) training. Experimental results on the AIME and MATH benchmarks show that M1 not only outperforms previous linear RNN models but also matches the performance of state-of-the-art DeepSeek R1 distilled reasoning models at a similar scale. We also compare our generation speed with vLLM, a highly performant general-purpose inference engine, and observe more than a 3x speedup over a same-size transformer. With this throughput speedup, we achieve higher accuracy than DeepSeek R1 distilled transformer reasoning models under a fixed generation-time budget using self-consistency voting. Overall, we introduce a hybrid Mamba reasoning model and provide a more effective approach to scaling test-time generation using self-consistency or long chain-of-thought reasoning.
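For context, self-consistency voting samples several independent chains of thought for the same problem and returns the majority final answer, so a model that decodes faster can cast more votes within the same wall-clock budget. Below is a minimal Python sketch of that idea under stated assumptions; the generate and extract_answer helpers are hypothetical stand-ins for the model's sampling and answer-parsing steps, not code from the paper.

    import time
    from collections import Counter

    def self_consistency_vote(generate, extract_answer, prompt, budget_s):
        # Sample independent chains of thought until the time budget is spent.
        # generate(prompt) is assumed to return one sampled completion;
        # extract_answer(text) is assumed to parse out its final answer.
        answers = []
        start = time.monotonic()
        while time.monotonic() - start < budget_s:
            completion = generate(prompt)  # one sampled chain of thought
            answers.append(extract_answer(completion))
        # Majority vote over final answers; higher throughput means more votes.
        return Counter(answers).most_common(1)[0][0] if answers else None

Under a fixed budget_s, a decoder with 3x higher throughput collects roughly three times as many votes, which is how the speedup reported above can translate into higher accuracy at equal generation time.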
Similar Papers
Thinking Slow, Fast: Scaling Inference Compute with Distilled Reasoners
Computation and Language
Faster AI can solve math problems better.
Apriel-H1: Towards Efficient Enterprise Reasoning Models
Machine Learning (CS)
Makes smart computer programs run much faster.
Scaling Reasoning without Attention
Machine Learning (CS)
Makes computers think smarter and faster.