Mitigating Spurious Correlations Between Question and Answer via Chain-of-Thought Correctness Perception Distillation
By: Hongyan Xie, Yitong Yao, Yikun Ban, and more
Potential Business Impact:
Teaches small AI to think better by fixing its mistakes.
Large language models (LLMs) excel at reasoning tasks but are expensive to deploy. Thus, small language models (SLMs) are fine-tuned on chain-of-thought (CoT) data generated by LLMs to inherit their reasoning abilities. However, these CoT data may include noisy rationales that either fail to substantiate the answers or contribute no additional information to support answer prediction, which leads SLMs to capture spurious correlations between questions and answers and compromises the quality of their reasoning. In this work, we propose Chain-of-Thought Correctness Perception Distillation (CoPeD), which aims to improve the reasoning quality of the student model from the perspectives of task setting and data utilization. First, we introduce a correctness-aware task setting that encourages the student model to predict answers based on correct rationales and to revise rationales when they are incorrect. This setting improves the faithfulness of reasoning and allows the model to learn from its mistakes. Second, we propose a Correctness-Aware Weighted loss, which dynamically adjusts the contribution of each training instance based on the combined loss of the rationale and the answer. This strategy encourages the model to focus more on samples where the rationale offers stronger support for the correct answer. Experiments show that CoPeD is effective on both in-distribution (IND) and out-of-distribution (OOD) benchmark reasoning datasets.
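The abstract does not give the exact form of the Correctness-Aware Weighted loss, but the idea of down-weighting instances whose rationale poorly supports the answer can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: it assumes per-sample rationale and answer losses are available and uses a softmax over their negated sum to assign instance weights.

```python
import math

def correctness_aware_weighted_loss(rationale_losses, answer_losses):
    """Hypothetical sketch of a correctness-aware weighting scheme:
    instances whose combined (rationale + answer) loss is low, i.e. whose
    rationale strongly supports the answer, receive larger weights."""
    combined = [r + a for r, a in zip(rationale_losses, answer_losses)]
    # Softmax over the negated combined loss: lower loss -> higher weight.
    exps = [math.exp(-c) for c in combined]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Batch loss as a weighted sum, emphasizing well-supported samples.
    return sum(w * c for w, c in zip(weights, combined))

# A sample with a well-aligned rationale (combined loss 1.0) dominates
# a noisy one (combined loss 4.0), pulling the batch loss toward 1.0.
loss = correctness_aware_weighted_loss([0.5, 2.0], [0.5, 2.0])
```

The exact weighting function (softmax temperature, normalization, or whether weights are detached from the gradient) would depend on choices described in the full paper.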
Similar Papers
Deconstructing Long Chain-of-Thought: A Structured Reasoning Optimization Framework for Long CoT Distillation
Artificial Intelligence
Teaches computers to think better, step-by-step.
Search-Based Correction of Reasoning Chains for Language Models
Machine Learning (CS)
Fixes AI mistakes in its thinking steps.
Effectiveness of Chain-of-Thought in Distilling Reasoning Capability from Large Language Models
Computation and Language
Teaches small computers to think like big ones.