MIND: From Passive Mimicry to Active Reasoning through Capability-Aware Multi-Perspective CoT Distillation
By: Jin Cui, Jiaqi Guo, Jiepeng Zhou, and more
Potential Business Impact:
Teaches small computers big thinking skills.
While Large Language Models (LLMs) have demonstrated remarkable capabilities on complex tasks through Chain-of-Thought reasoning, practical resource constraints have sparked interest in transferring these abilities to smaller models. However, achieving both in-domain performance and cross-domain generalization remains challenging. Existing approaches typically restrict students to following a single golden rationale and treat different reasoning paths independently. Because teacher and student have distinct inductive biases and intrinsic preferences, and because the student's capacity and reasoning preferences evolve during training, a teacher's "optimal" rationale can act as out-of-distribution noise. This misalignment degrades the student's latent reasoning distribution, causing suboptimal performance. To bridge this gap, we propose MIND, a capability-adaptive framework that shifts distillation from passive mimicry to active cognitive construction. We synthesize diverse teacher perspectives through a novel "Teaching Assistant" network. Through a Feedback-Driven Inertia Calibration mechanism, this network uses inertia-filtered training loss to align supervision with the student's current adaptability, improving performance while mitigating catastrophic forgetting. Extensive experiments demonstrate that MIND achieves state-of-the-art performance on both in-distribution and out-of-distribution benchmarks, and our latent-space analysis further confirms that reasoning ability is internalized rather than memorized.
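The abstract's Feedback-Driven Inertia Calibration idea — down-weighting teacher rationales that sit too far from the student's current capability, as judged by an inertia-smoothed training loss — might be sketched as below. This is a minimal illustration under stated assumptions: the function name, the EMA-based "inertia" estimate, and the softmax weighting over loss distances are all hypothetical, not the paper's actual formulation.

```python
import math

def inertia_filter_weights(rationale_losses, ema_loss, beta=0.9, tau=1.0):
    """Hypothetical capability-aware weighting of multiple teacher rationales.

    rationale_losses: the student's current loss on each candidate rationale.
    ema_loss: inertia-smoothed estimate of the student's typical loss.
    Rationales whose loss is far from this estimate are treated as
    out-of-distribution supervision and down-weighted.
    """
    # Update the inertia (exponential moving average) of the student's loss.
    mean_loss = sum(rationale_losses) / len(rationale_losses)
    new_ema = beta * ema_loss + (1 - beta) * mean_loss

    # Softmax over negative squared distance to the EMA: rationales near the
    # student's current capability receive the most supervision weight.
    scores = [-((loss - new_ema) ** 2) / tau for loss in rationale_losses]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return weights, new_ema
```

In this sketch, a rationale the student already handles easily and one far beyond its current ability both contribute less than one near the student's present loss level, which is the capability-alignment intuition the abstract describes.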
Similar Papers
MIND: Multi-rationale INtegrated Discriminative Reasoning Framework for Multi-modal Large Models
Artificial Intelligence
Helps AI think and fix its own mistakes.
A MIND for Reasoning: Meta-learning for In-context Deduction
Computation and Language
Helps small AI learn to reason better.
Reasoning Within the Mind: Dynamic Multimodal Interleaving in Latent Space
CV and Pattern Recognition
Helps computers "think" better by mixing words and pictures.