Effectiveness of Chain-of-Thought in Distilling Reasoning Capability from Large Language Models

Published: November 7, 2025 | arXiv ID: 2511.05184v1

By: Cong-Thanh Do, Rama Doddipatla, Kate Knill

Potential Business Impact:

Teaches small computers to think like big ones.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Chain-of-Thought (CoT) prompting is a widely used method for improving the reasoning capability of Large Language Models (LLMs). More recently, CoT has been leveraged in Knowledge Distillation (KD) to transfer reasoning capability from a larger LLM to a smaller one. This paper examines the role of CoT in distilling reasoning capability from larger to smaller LLMs using white-box KD, analysing its effectiveness in improving the distilled models' performance on a range of natural language reasoning and understanding tasks. We conduct white-box KD experiments with LLMs from the Qwen and Llama2 families, using CoT data from the CoT-Collection dataset. The distilled models are then evaluated on natural language reasoning and understanding tasks from the BIG-Bench-Hard (BBH) benchmark, which poses complex challenges for smaller LLMs. Experimental results show that CoT improves the effectiveness of white-box KD, enabling the distilled models to achieve better average performance on the BBH reasoning and understanding tasks.
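To make the setup concrete: white-box KD means the student is trained against the teacher's full output distributions (its logits), not just its final answers, over CoT-annotated training sequences (rationale tokens plus the final answer). The sketch below shows a standard temperature-scaled KL distillation objective blended with cross-entropy; it is a minimal illustration of the general technique, not the paper's exact training recipe, and the function name, `temperature`, and `alpha` weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def white_box_kd_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Token-level white-box KD loss (illustrative sketch).

    student_logits, teacher_logits: (batch, seq_len, vocab)
    labels: (batch, seq_len) CoT-augmented targets, padding = -100
    """
    # Soft targets: the teacher's distribution, softened by the temperature.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # KL(teacher || student), scaled by T^2 so gradient magnitudes are
    # comparable to the unsoftened objective (standard distillation practice).
    kd = F.kl_div(student_log_probs, teacher_probs,
                  reduction="batchmean") * temperature ** 2

    # Ordinary next-token cross-entropy on the CoT targets
    # (rationale tokens + final answer), ignoring padding positions.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # Blend the distillation signal with the hard-label signal.
    return alpha * kd + (1 - alpha) * ce
```

Training on CoT sequences means the student is distilled on the teacher's distribution over the intermediate reasoning steps as well as the answer, which is the mechanism the paper credits for the improved BBH performance.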

Page Count
13 pages

Category
Computer Science:
Computation and Language