To Think or Not to Think: The Hidden Cost of Meta-Training with Excessive CoT Examples
By: Vignesh Kothapalli, Ata Fatahibaarzi, Hamed Firooz, and more
Chain-of-thought (CoT) prompting combined with few-shot in-context learning (ICL) has unlocked significant reasoning capabilities in large language models (LLMs). However, ICL with CoT examples is ineffective on novel tasks when pre-training knowledge is insufficient. We study this problem in a controlled setting using the CoT-ICL Lab framework and propose meta-training techniques for learning novel abstract reasoning tasks in-context. Although CoT examples facilitate reasoning, we find that their excessive inclusion during meta-training degrades performance when CoT supervision is limited. To mitigate this behavior, we propose CoT-Recipe, a formal approach to modulating the mix of CoT and non-CoT examples in meta-training sequences. We demonstrate that careful modulation via CoT-Recipe can increase the accuracy of transformers on novel tasks by up to 300%, even when no CoT examples are available in-context. We confirm the broader effectiveness of these techniques by applying them to pretrained LLMs (the Qwen2.5 series) on symbolic reasoning tasks, observing accuracy gains of up to 130%.
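The abstract does not spell out how CoT-Recipe modulates the mix of CoT and non-CoT examples, so the sketch below is only a plausible illustration of the general idea, not the paper's actual method: each in-context example carries its rationale with some probability, and that probability follows a schedule over meta-training. All names here (build_meta_training_sequence, cot_recipe_schedule, cot_ratio) are hypothetical.

```python
import random

def build_meta_training_sequence(task_examples, cot_ratio, rng=None):
    """Assemble one meta-training prompt from (input, rationale, answer) triples,
    including the CoT rationale for each example with probability `cot_ratio`."""
    rng = rng or random.Random(0)
    parts = []
    for x, rationale, y in task_examples:
        if rng.random() < cot_ratio:
            # CoT example: input -> intermediate steps -> answer
            parts.append(f"Q: {x}\nSteps: {rationale}\nA: {y}")
        else:
            # non-CoT example: input -> answer only
            parts.append(f"Q: {x}\nA: {y}")
    return "\n\n".join(parts)

def cot_recipe_schedule(step, total_steps, start=0.9, end=0.2):
    """Linearly anneal the CoT fraction over meta-training.
    Illustrative placeholder for a 'recipe'; the paper's schedule may differ."""
    frac = step / max(total_steps - 1, 1)
    return start + frac * (end - start)

# Example: at meta-training step 500 of 1000, roughly half the in-context
# examples in the assembled sequence would include their CoT rationale.
examples = [("2+3", "2 plus 3 equals 5", "5"), ("4*6", "4 times 6 equals 24", "24")]
ratio = cot_recipe_schedule(step=500, total_steps=1000)
print(build_meta_training_sequence(examples, ratio))
```

Under this reading, setting the ratio high early and annealing it down would expose the model to abundant CoT supervision at first while forcing it to answer without rationales later, which is one way the reported no-CoT-in-context gains could arise.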
Similar Papers
The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context Learning
Computation and Language
Computers learn better by *not* explaining their steps.
Revisiting Chain-of-Thought Prompting: Zero-shot Can Be Stronger than Few-shot
Computation and Language
Stronger AI learns math without example steps.
From Perception to Reasoning: Deep Thinking Empowers Multimodal Large Language Models
Computation and Language
Helps AI "think step-by-step" to solve harder problems.