Review-Instruct: A Review-Driven Multi-Turn Conversations Generation Method for Large Language Models
By: Jiangxu Wu, Cong Wang, TianHuang Su, and more
Potential Business Impact:
Makes AI chatbots better at talking back and forth.
The effectiveness of large language models (LLMs) in conversational AI is hindered by their reliance on single-turn supervised fine-tuning (SFT) data, which limits contextual coherence in multi-turn dialogues. Existing methods for generating multi-turn dialogue data struggle to ensure both diversity and quality in instructions. To address this, we propose Review-Instruct, a novel framework that synthesizes multi-turn conversations through an iterative "Ask-Respond-Review" process involving three agent roles: a Candidate, multiple Reviewers, and a Chairman. The framework iteratively refines instructions by incorporating Reviewer feedback, enhancing dialogue diversity and difficulty. We construct a multi-turn dataset using the Alpaca dataset and fine-tune the LLaMA2-13B model. Evaluations on MT-Bench, MMLU-Pro, and Auto-Arena demonstrate significant improvements, achieving absolute gains of 2.9% on MMLU-Pro and 2% on MT-Bench compared to prior state-of-the-art models based on LLaMA2-13B. Ablation studies confirm the critical role of the Review stage and the use of multiple Reviewers in boosting instruction diversity and difficulty. Our work highlights the potential of review-driven, multi-agent frameworks for generating high-quality conversational data at scale.
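The abstract describes the Ask-Respond-Review loop only at a high level. The sketch below shows one plausible way to organize that loop in Python: a Candidate answers the current instruction, several Reviewers critique the exchange, and a Chairman folds the reviews into the next instruction. This is a minimal illustration under assumptions, not the paper's implementation; the function name `review_instruct_dialogue`, the prompt wording, and the `llm` callable (any function mapping a prompt string to a completion string) are all hypothetical.

```python
# Minimal sketch of an Ask-Respond-Review loop for multi-turn data synthesis.
# Assumptions: `llm` is any callable str -> str (e.g. a wrapper around a chat
# model); prompt phrasing and parameter defaults are illustrative only.

from typing import Callable, List, Tuple


def review_instruct_dialogue(
    seed_instruction: str,
    llm: Callable[[str], str],
    num_turns: int = 4,
    num_reviewers: int = 3,
) -> List[Tuple[str, str]]:
    """Build a multi-turn conversation from one seed instruction."""
    dialogue: List[Tuple[str, str]] = []
    instruction = seed_instruction

    for _ in range(num_turns):
        # Respond: the Candidate answers the current instruction in context.
        context = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in dialogue)
        answer = llm(f"{context}\nUser: {instruction}\nAssistant:")
        dialogue.append((instruction, answer))

        # Review: multiple Reviewers critique the last exchange and suggest
        # harder, more diverse directions for the follow-up question.
        reviews = [
            llm(
                "You are a reviewer. Point out weaknesses in the last answer "
                "and propose a harder, more diverse follow-up direction.\n"
                f"{context}\nUser: {instruction}\nAssistant: {answer}"
            )
            for _ in range(num_reviewers)
        ]

        # Ask: the Chairman aggregates the reviews into the next instruction.
        instruction = llm(
            "You are the chairman. Combine the reviewer feedback below into a "
            "single follow-up user instruction for the next turn.\n"
            + "\n".join(reviews)
        )

    return dialogue
```

In this reading, calling `review_instruct_dialogue("Explain recursion.", my_model_fn)` would return a list of (instruction, answer) pairs; per the abstract, seed instructions come from the Alpaca dataset and the resulting dialogues are used to fine-tune LLaMA2-13B.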
Similar Papers
Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models
Computation and Language
Makes chatbots remember conversations better.
Reviewriter: AI-Generated Instructions For Peer Review Writing
Human-Computer Interaction
Helps students write better peer reviews with AI.
Proactive Guidance of Multi-Turn Conversation in Industrial Search
Computation and Language
Helps search engines guess what you want next.