Score: 1

Empowering Lightweight MLLMs with Reasoning via Long CoT SFT

Published: September 3, 2025 | arXiv ID: 2509.03321v1

By: Linyu Ou

Potential Business Impact:

Teaches small AI models to reason better by training them on worked-out examples.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

While Reinforcement Learning with Verifiable Rewards (RLVR) has enhanced the reasoning of large-scale language models (LLMs), its efficacy for lightweight multimodal large language models (MLLMs) with fewer than seven billion parameters remains underexplored. This paper investigates the role of long Chain-of-Thought (long CoT) data in enhancing the reasoning abilities of such MLLMs. Our findings demonstrate that Supervised Fine-Tuning (SFT) with long CoT data significantly improves MLLM reasoning. Furthermore, we observe that after this initial SFT phase, MLLMs can achieve additional performance gains through a subsequent RL stage. We conclude that an SFT stage with long CoT data is a critical prerequisite for developing the reasoning capabilities of lightweight MLLMs.
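
For readers who want a concrete picture of the recipe the abstract describes, below is a minimal sketch of the two-stage pipeline: supervised fine-tuning on long chain-of-thought traces, followed by a simple reward-weighted RL update. The model checkpoint, data fields, and reward rule are illustrative assumptions, not the paper's actual setup; a small text-only model stands in for a true multimodal model to keep the example self-contained.

```python
# Minimal sketch (not the paper's code) of the two-stage recipe:
#   Stage 1: SFT on long chain-of-thought (CoT) traces.
#   Stage 2: RL refinement with a verifiable (automatically checkable) reward.
# The checkpoint name, data fields, and reward rule are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # text-only stand-in for a lightweight MLLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def sft_step(prompt: str, long_cot_trace: str) -> float:
    """Stage 1: one supervised step whose target is the full reasoning trace."""
    text = prompt + long_cot_trace + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
    # Standard causal-LM objective; Hugging Face shifts the labels internally.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

def verifiable_reward(generated: str, gold_answer: str) -> float:
    """Binary reward that can be checked automatically, RLVR-style."""
    return 1.0 if gold_answer.strip() in generated else 0.0

def rl_step(prompt: str, gold_answer: str) -> float:
    """Stage 2: REINFORCE-style update, weighting log-likelihood by the reward."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        gen = model.generate(**inputs, max_new_tokens=256, do_sample=True)
    completion = gen[0][inputs["input_ids"].shape[1]:]
    reward = verifiable_reward(
        tokenizer.decode(completion, skip_special_tokens=True), gold_answer
    )
    if reward > 0.0:
        full = torch.cat([inputs["input_ids"][0], completion]).unsqueeze(0)
        labels = full.clone()
        labels[:, : inputs["input_ids"].shape[1]] = -100  # score only the completion
        loss = reward * model(input_ids=full, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    return reward
```

In practice the RL stage would use a stronger algorithm (e.g., a PPO- or GRPO-style objective with a baseline) and a vision-language model class; the sketch only conveys the ordering the paper argues for: long-CoT SFT first, reward-driven refinement second.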


Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition