Effectiveness of Chain-of-Thought in Distilling Reasoning Capability from Large Language Models
By: Cong-Thanh Do, Rama Doddipatla, Kate Knill
Potential Business Impact:
Teaches small AI models to think like big ones.
Chain-of-Thought (CoT) prompting is a widely used method to improve the reasoning capability of Large Language Models (LLMs). More recently, CoT has been leveraged in Knowledge Distillation (KD) to transfer reasoning capability from a larger LLM to a smaller one. This paper examines the role of CoT in distilling the reasoning capability from larger LLMs to smaller LLMs using white-box KD, analysing its effectiveness in improving the performance of the distilled models for various natural language reasoning and understanding tasks. We conduct white-box KD experiments using LLMs from the Qwen and Llama2 families, employing CoT data from the CoT-Collection dataset. The distilled models are then evaluated on natural language reasoning and understanding tasks from the BIG-Bench-Hard (BBH) benchmark, which presents complex challenges for smaller LLMs. Experimental results demonstrate the role of CoT in improving white-box KD effectiveness, enabling the distilled models to achieve better average performance in natural language reasoning and understanding tasks from BBH.
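For intuition, here is a minimal sketch of the kind of objective used in white-box KD: the student is trained to match the teacher's token-level output distributions on CoT-formatted sequences. This is an illustrative example only, not the paper's exact training recipe; the function name kd_loss and the temperature value are assumptions.

```python
# Minimal sketch of a white-box KD loss on CoT-formatted targets, assuming
# teacher and student logits are already computed over the same vocabulary.
# kd_loss and temperature are illustrative choices, not taken from the paper.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Token-level KL divergence between teacher and student distributions.

    student_logits, teacher_logits: (batch, seq_len, vocab_size)
    """
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 (as in standard distillation) so gradients keep their magnitude.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy usage: a CoT-style target would interleave rationale tokens
# ("Let's think step by step ...") with the final answer before tokenisation.
batch, seq_len, vocab = 2, 8, 100
student_logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
teacher_logits = torch.randn(batch, seq_len, vocab)
loss = kd_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```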
Similar Papers
Chain-of-Conceptual-Thought: Eliciting the Agent to Deeply Think within the Response
Computation and Language
Helps AI understand feelings and give better advice.
From Perception to Reasoning: Deep Thinking Empowers Multimodal Large Language Models
Computation and Language
Helps AI "think step-by-step" to solve harder problems.