From Perception to Reasoning: Deep Thinking Empowers Multimodal Large Language Models
By: Wenxin Zhu, Andong Chen, Yuchen Song, and more
Potential Business Impact:
Helps AI "think step-by-step" to solve harder problems.
With the remarkable success of Multimodal Large Language Models (MLLMs) in perception tasks, enhancing their complex reasoning capabilities has emerged as a critical research focus. Existing models still suffer from challenges such as opaque reasoning paths and insufficient generalization ability. Chain-of-Thought (CoT) reasoning, which has demonstrated significant efficacy in language models by enhancing reasoning transparency and output interpretability, holds promise for improving model reasoning capabilities when extended to the multimodal domain. This paper provides a systematic review centered on "Multimodal Chain-of-Thought" (MCoT). First, it analyzes the background and theoretical motivations for its inception from the perspectives of technical evolution and task demands. Then, it introduces mainstream MCoT methods from three aspects: CoT paradigms, the post-training stage, and the inference stage, while also analyzing their underlying mechanisms. Furthermore, the paper summarizes existing evaluation benchmarks and metrics, and discusses the application scenarios of MCoT. Finally, it analyzes the challenges currently facing MCoT and provides an outlook on its future research directions.
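To make the idea concrete, below is a minimal sketch of what a multimodal chain-of-thought prompt might look like in practice: the model is given an image together with a question and is explicitly asked to lay out intermediate reasoning steps before answering. The message schema, field names, and file path are illustrative placeholders, not the API of any specific MLLM library or a method proposed in the paper.

```python
# Minimal sketch of a multimodal chain-of-thought (MCoT) prompt, assuming a
# generic chat-style MLLM interface that accepts interleaved image and text
# parts. All names below (message schema, image path) are hypothetical.

def build_mcot_prompt(image_path: str, question: str) -> list[dict]:
    """Assemble a message that asks the model to reason step by step
    over the image before committing to a final answer."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": image_path},  # visual evidence
                {
                    "type": "text",
                    "text": (
                        f"{question}\n"
                        "First describe the relevant visual evidence, "
                        "then reason step by step, and finally give the "
                        "answer on a new line prefixed with 'Answer:'."
                    ),
                },
            ],
        }
    ]


if __name__ == "__main__":
    # Example: a chart-reasoning question; the image path is hypothetical.
    messages = build_mcot_prompt(
        "sales_chart.png", "Which quarter had the largest revenue drop?"
    )
    for part in messages[0]["content"]:
        print(part)
```

The key point the sketch illustrates is that, compared with asking for the answer directly, the prompt elicits an explicit reasoning trace grounded in the visual input, which is what makes the output more transparent and interpretable.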
Similar Papers
Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models
Artificial Intelligence
Makes computers think more deeply to solve hard problems.
Effectiveness of Chain-of-Thought in Distilling Reasoning Capability from Large Language Models
Computation and Language
Teaches small computers to think like big ones.