MiMo: Unlocking the Reasoning Potential of Language Model -- From Pretraining to Posttraining
By: LLM-Core Xiaomi: Bingquan Xia and more
Potential Business Impact:
Helps computers solve math and code problems.
We present MiMo-7B, a large language model born for reasoning tasks, with optimization across both pre-training and post-training stages. During pre-training, we enhance the data preprocessing pipeline and employ a three-stage data mixing strategy to strengthen the base model's reasoning potential. MiMo-7B-Base is pre-trained on 25 trillion tokens, with an additional Multi-Token Prediction objective for enhanced performance and accelerated inference speed. During post-training, we curate a dataset of 130K verifiable mathematics and programming problems for reinforcement learning, integrating a test-difficulty-driven code-reward scheme to alleviate sparse-reward issues and employing strategic data resampling to stabilize training. Extensive evaluations show that MiMo-7B-Base possesses exceptional reasoning potential, outperforming even much larger 32B models. The final RL-tuned model, MiMo-7B-RL, achieves superior performance on mathematics, code, and general reasoning tasks, surpassing the performance of OpenAI o1-mini. The model checkpoints are available at https://github.com/xiaomimimo/MiMo.
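To illustrate the test-difficulty-driven code-reward idea mentioned in the abstract, the minimal Python sketch below grades a code submission by difficulty tier rather than all-or-nothing, so partially correct solutions still receive signal. The tier names, weights, and the `code_reward` helper are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Dict, List

def code_reward(passed: Dict[str, List[bool]], weights: Dict[str, float]) -> float:
    """Dense reward from per-test pass results grouped by difficulty tier.

    Hypothetical sketch: `passed` maps a difficulty tier (e.g. "easy", "hard")
    to the pass/fail outcomes of that tier's test cases; `weights` assigns each
    tier a share of the total reward. Each tier contributes its weight scaled
    by the fraction of its tests that pass, so a partial solution earns graded
    credit instead of a sparse all-or-nothing signal.
    """
    reward = 0.0
    for tier, results in passed.items():
        if not results:
            continue
        tier_score = sum(results) / len(results)       # fraction of tests passed in this tier
        reward += weights.get(tier, 0.0) * tier_score  # harder tiers can carry more weight
    return reward

# Example: a solution that clears all easy tests but only half of the hard ones
# still earns a nonzero reward, mitigating the sparse-reward problem.
print(code_reward(
    {"easy": [True, True], "hard": [True, False]},
    {"easy": 0.4, "hard": 0.6},
))  # -> 0.4 * 1.0 + 0.6 * 0.5 = 0.7
```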
Similar Papers
MiMo-VL Technical Report
Computation and Language
Helps computers understand pictures and words better.
LMM-R1: Empowering 3B LMMs with Strong Reasoning Abilities Through Two-Stage Rule-Based RL
Computation and Language
Makes AI better at thinking with pictures and words.
Advancing Multimodal Reasoning: From Optimized Cold Start to Staged Reinforcement Learning
Machine Learning (CS)
Teaches AI to solve hard math and logic problems.