Multi-turn Training with Basic Human Feedback Helps Little on LLM Reasoning

Published: October 24, 2025 | arXiv ID: 2510.21339v2

By: Qiang Liu, Wuganjing Song, Zhenzhou Lin, and more

Potential Business Impact:

Simple single-turn training works best for AI reasoning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The reasoning capabilities of Large Language Models (LLMs) are typically developed through single-turn reinforcement learning, whereas real-world applications often involve multi-turn interactions with human feedback, creating a potential mismatch between training and deployment conditions. In this work, we study whether multi-turn training with human feedback is necessary for reasoning tasks. We compare conventional single-turn training with three multi-turn strategies and reach conclusions contrary to previous research. We find that models trained in a single-turn setting generalize effectively to both single- and multi-turn evaluations, while models trained with multi-turn strategies exhibit significant degradation in single-turn reasoning performance. These results suggest that for tasks with complete information, robust single-turn training remains more effective and reliable, as multi-turn training with basic feedback provides limited benefits and can even degrade reasoning capabilities.
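The paper's exact training setup is not detailed in this summary, but the core contrast can be sketched generically. Below is a minimal, hypothetical illustration of the difference between a single-turn rollout (one prompt, one response, one reward) and a multi-turn rollout with basic "try again"-style feedback; the function names, the feedback string, and the stub model/judge are all illustrative assumptions, not the authors' implementation.

```python
def single_turn_rollout(model, prompt, judge):
    """One prompt -> one response -> one scalar reward."""
    response = model(prompt)
    return [(prompt, response, judge(response))]

def multi_turn_rollout(model, prompt, judge, max_turns=3,
                       feedback="That answer is incorrect. Please try again."):
    """Re-prompt with basic feedback until the judge accepts or turns run out.

    Hypothetical sketch: real multi-turn RL setups differ in how feedback is
    phrased, how rewards are assigned per turn, and when episodes terminate.
    """
    transcript, trajectory = prompt, []
    for _ in range(max_turns):
        response = model(transcript)
        reward = judge(response)
        trajectory.append((transcript, response, reward))
        if reward > 0:  # solved: stop early
            break
        # Append the model's failed attempt plus basic feedback, then retry.
        transcript += "\n" + response + "\n" + feedback
    return trajectory

# Toy demo with stub model/judge (for illustration only).
answers = iter(["42", "41", "43"])
model = lambda ctx: next(answers)          # ignores context, emits canned answers
judge = lambda r: 1.0 if r == "43" else 0.0
print(len(multi_turn_rollout(model, "What is 6*7+1?", judge)))  # prints 3
```

The paper's finding is that optimizing against trajectories like the multi-turn one above, with only basic feedback signals, yields little benefit over the single-turn case and can hurt single-turn performance.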

Country of Origin
🇭🇰 Hong Kong

Page Count
10 pages

Category
Computer Science:
Computation and Language