Multi-turn Training with Basic Human Feedback Helps Little on LLM Reasoning
By: Qiang Liu, Wuganjing Song, Zhenzhou Lin, et al.
Potential Business Impact:
Simple single-turn training works best for AI reasoning.
The reasoning capabilities of Large Language Models (LLMs) are typically developed through single-turn reinforcement learning, whereas real-world applications often involve multi-turn interactions with human feedback, leading to a potential mismatch between training and deployment conditions. In this work, we study whether multi-turn training with human feedback is necessary for reasoning tasks. We compare conventional single-turn training with three multi-turn strategies and reach conclusions contrary to previous research. We find that models trained in a single-turn setting generalize effectively to both single- and multi-turn evaluations, while models trained with multi-turn strategies exhibit a significant degradation in single-turn reasoning performance. These results suggest that for tasks with complete information, robust single-turn training remains more effective and reliable, as multi-turn training with basic feedback provides limited benefits and can even degrade reasoning capabilities.
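To make the contrast between the two training regimes concrete, here is a minimal Python sketch of the episode structures being compared: one prompt and one verified response versus repeated attempts interleaved with basic corrective feedback. All names (`generate`, `verify`, `feedback_for`) are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the two RL episode structures compared in the paper.
# Function names and stubs are illustrative, not the paper's implementation.

from typing import Callable, List, Tuple


def single_turn_episode(
    generate: Callable[[str], str],
    verify: Callable[[str], float],
    prompt: str,
) -> Tuple[str, float]:
    """Conventional setup: one prompt, one response, one terminal reward."""
    response = generate(prompt)
    reward = verify(response)  # e.g. 1.0 if the final answer checks out
    return response, reward


def multi_turn_episode(
    generate: Callable[[str], str],
    verify: Callable[[str], float],
    feedback_for: Callable[[str], str],
    prompt: str,
    max_turns: int = 3,
) -> Tuple[List[str], float]:
    """Multi-turn setup: basic feedback ("that's wrong, try again") is
    appended between attempts; reward is granted when a turn succeeds."""
    transcript = [prompt]
    for _ in range(max_turns):
        response = generate("\n".join(transcript))
        transcript.append(response)
        reward = verify(response)
        if reward > 0:
            return transcript, reward
        transcript.append(feedback_for(response))  # minimal corrective signal
    return transcript, 0.0


# Toy stubs so the sketch runs end to end.
if __name__ == "__main__":
    attempts = iter(["answer: 41", "answer: 42"])
    gen = lambda _ctx: next(attempts)
    ver = lambda resp: 1.0 if "42" in resp else 0.0
    fb = lambda _resp: "Feedback: that answer is incorrect; please try again."
    print(multi_turn_episode(gen, ver, fb, "What is 6 * 7?"))
```

The paper's finding, in these terms, is that optimizing the policy only on `single_turn_episode` rollouts still transfers to `multi_turn_episode`-style evaluation, while the reverse does not hold.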
Similar Papers
Multi-Turn Puzzles: Evaluating Interactive Reasoning and Strategic Dialogue in LLMs
Computation and Language
Tests AI's ability to talk and learn.
Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models
Computation and Language
Makes chatbots remember conversations better.