Score: 1

Learning from Mistakes: Negative Reasoning Samples Enhance Out-of-Domain Generalization

Published: January 8, 2026 | arXiv ID: 2601.04992v2

By: Xueyun Tian, Minghua Ma, Bingbing Xu, and more

Potential Business Impact:

Teaches AI models to learn from their own failed reasoning attempts, making them more reliable on unfamiliar kinds of problems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Supervised fine-tuning (SFT) on chain-of-thought (CoT) trajectory demonstrations is a common approach for enabling reasoning in large language models. Standard practice typically retains only trajectories with correct final answers (positives) while ignoring the rest (negatives). We argue that this paradigm discards substantial supervision and exacerbates overfitting, limiting out-of-domain (OOD) generalization. Surprisingly, we find that incorporating negative trajectories into SFT yields substantial OOD generalization gains over positive-only training, as these trajectories often retain valid intermediate reasoning despite incorrect final answers. To understand this effect in depth, we systematically analyze data, training dynamics, and inference behavior, identifying 22 recurring patterns in negative chains that serve a dual role: they moderate loss descent to mitigate overfitting during training and boost policy entropy by 35.67% during inference to facilitate exploration. Motivated by these observations, we propose Gain-based LOss Weighting (GLOW), an adaptive, sample-aware scheme that exploits these distinctive training dynamics by rescaling each sample's loss according to its inter-epoch progress. Empirically, GLOW efficiently leverages unfiltered trajectories, yielding a 5.51% OOD gain over positive-only SFT on Qwen2.5-7B and boosting MMLU from 72.82% to 76.47% when used as an RL initialization.
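The abstract describes GLOW only at a high level. Below is a minimal PyTorch sketch of one way "rescaling per-sample loss based on inter-epoch progress" could be realized; the function names (`glow_weights`, `glow_step`), the softmax-over-gain weighting, and the HuggingFace-style model interface are all assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of gain-based per-sample loss weighting, assuming
# a HuggingFace-style causal LM whose forward pass returns `.logits`.
# None of these names or formulas come from the paper's code.
import torch
import torch.nn.functional as F

def glow_weights(curr_loss: torch.Tensor,
                 prev_loss: torch.Tensor,
                 temperature: float = 1.0) -> torch.Tensor:
    """Weight each sample by its inter-epoch loss improvement ("gain").

    curr_loss, prev_loss: shape (batch,) per-sample losses for the
    current and previous epoch. Samples still improving get larger
    weights; plateaued samples (likely memorized) get smaller ones.
    """
    gain = (prev_loss - curr_loss).clamp(min=0.0)       # progress since last epoch
    weights = torch.softmax(gain / temperature, dim=0)  # normalize over the batch
    return weights * gain.numel()                       # keep the mean weight near 1

def glow_step(model, batch, prev_loss, optimizer):
    """One training step on an unfiltered batch (positives and negatives)."""
    logits = model(batch["input_ids"]).logits
    # Per-token cross-entropy, shifted for next-token prediction.
    per_token = F.cross_entropy(
        logits[:, :-1].transpose(1, 2),   # (batch, vocab, seq-1)
        batch["labels"][:, 1:],           # (batch, seq-1)
        reduction="none",
        ignore_index=-100,
    )
    mask = (batch["labels"][:, 1:] != -100).float()
    per_sample = (per_token * mask).sum(1) / mask.sum(1).clamp(min=1.0)

    # Weights are computed from cached previous-epoch losses; on the
    # first epoch one could simply train unweighted.
    with torch.no_grad():
        w = glow_weights(per_sample.detach(), prev_loss)

    loss = (w * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return per_sample.detach()  # cache as prev_loss for the next epoch
```

The intuition this sketch encodes follows the abstract's observation: samples that keep making progress between epochs (often imperfect negative chains) retain gradient influence, while samples whose loss has already collapsed, a symptom of memorization, are down-weighted.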

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
21 pages

Category
Computer Science: Computation and Language