Goal Alignment in LLM-Based User Simulators for Conversational AI
By: Shuhaib Mehri, Xiaocheng Yang, Takyoung Kim, and more
Potential Business Impact:
Makes simulated users stick to their goals when testing chatbots.
User simulators are essential to conversational AI, enabling scalable agent development and evaluation through simulated interactions. While current Large Language Models (LLMs) have advanced user simulation capabilities, we reveal that they struggle to consistently demonstrate goal-oriented behavior across multi-turn conversations, a critical limitation that compromises their reliability in downstream applications. We introduce User Goal State Tracking (UGST), a novel framework that tracks user goal progression throughout conversations. Leveraging UGST, we present a three-stage methodology for developing user simulators that can autonomously track goal progression and reason to generate goal-aligned responses. Moreover, we establish comprehensive evaluation metrics for measuring goal alignment in user simulators, and demonstrate that our approach yields substantial improvements across two benchmarks (MultiWOZ 2.4 and τ-Bench). Our contributions address a critical gap in conversational AI and establish UGST as an essential framework for developing goal-aligned user simulators.
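To make the idea of tracking goal state across turns concrete, here is a minimal sketch of what a per-sub-goal state tracker for a user simulator might look like. This is an illustrative assumption, not the paper's actual UGST implementation: the class names, statuses, and the restaurant-booking example are hypothetical, loosely modeled on MultiWOZ-style goals.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Status(Enum):
    PENDING = "pending"      # user has not yet raised this sub-goal
    EXPRESSED = "expressed"  # user stated it; agent has not yet satisfied it
    FULFILLED = "fulfilled"  # agent's response satisfied it

@dataclass
class SubGoal:
    description: str
    status: Status = Status.PENDING

@dataclass
class GoalState:
    """Tracks per-sub-goal progress across conversation turns."""
    sub_goals: List[SubGoal] = field(default_factory=list)

    def mark(self, index: int, status: Status) -> None:
        self.sub_goals[index].status = status

    def next_pending(self) -> Optional[SubGoal]:
        """First sub-goal the simulated user should raise next, if any."""
        return next((g for g in self.sub_goals if g.status is Status.PENDING), None)

    def complete(self) -> bool:
        return all(g.status is Status.FULFILLED for g in self.sub_goals)

# Hypothetical restaurant-booking goal split into sub-goals.
state = GoalState([
    SubGoal("find a cheap Italian restaurant"),
    SubGoal("book a table for 2 at 19:00"),
])
state.mark(0, Status.EXPRESSED)  # turn 1: user asks for a restaurant
state.mark(0, Status.FULFILLED)  # turn 2: agent recommends a match
print(state.next_pending().description)  # -> book a table for 2 at 19:00
print(state.complete())                  # -> False
```

A simulator conditioned on such a state can check, before each turn, which sub-goals remain and generate a response that advances exactly one of them, which is the kind of goal-aligned behavior the abstract describes.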
Similar Papers
The Indispensable Role of User Simulation in the Pursuit of AGI
Artificial Intelligence
Makes AI learn faster by pretending to be people.
OnGoal: Tracking and Visualizing Conversational Goals in Multi-Turn Dialogue with Large Language Models
Human-Computer Interaction
Helps people talk to computers better.
LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training
Computation and Language
Creates fake computer actions to train robots.