Goal Alignment in LLM-Based User Simulators for Conversational AI

Published: July 27, 2025 | arXiv ID: 2507.20152v1

By: Shuhaib Mehri, Xiaocheng Yang, Takyoung Kim, and more

Potential Business Impact:

Makes chatbots stick to their goals.

User simulators are essential to conversational AI, enabling scalable agent development and evaluation through simulated interactions. While current Large Language Models (LLMs) have advanced user simulation capabilities, we reveal that they struggle to consistently demonstrate goal-oriented behavior across multi-turn conversations, a critical limitation that compromises their reliability in downstream applications. We introduce User Goal State Tracking (UGST), a novel framework that tracks user goal progression throughout conversations. Leveraging UGST, we present a three-stage methodology for developing user simulators that can autonomously track goal progression and reason to generate goal-aligned responses. Moreover, we establish comprehensive evaluation metrics for measuring goal alignment in user simulators, and demonstrate that our approach yields substantial improvements across two benchmarks (MultiWOZ 2.4 and τ-Bench). Our contributions address a critical gap in conversational AI and establish UGST as an essential framework for developing goal-aligned user simulators.
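To make the idea of tracking user goal progression concrete, here is a minimal sketch of what a goal-state tracker for a user simulator might look like. The paper's actual UGST schema is not reproduced here, so the class, status labels, and the restaurant-booking example are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

# Hypothetical status labels; the paper's own representation may differ.
STATUSES = ("pending", "in_progress", "achieved")


@dataclass
class UserGoalState:
    """Tracks per-sub-goal progress across a multi-turn conversation."""

    subgoals: dict = field(default_factory=dict)

    def add(self, subgoal: str) -> None:
        # Register a sub-goal; new sub-goals start as "pending".
        self.subgoals.setdefault(subgoal, "pending")

    def update(self, subgoal: str, status: str) -> None:
        # Advance a sub-goal's status after each conversation turn.
        if status not in STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.subgoals[subgoal] = status

    def unmet(self) -> list:
        # Sub-goals the simulated user should still pursue next turn.
        return [g for g, s in self.subgoals.items() if s != "achieved"]

    def aligned(self) -> bool:
        # The simulator is goal-aligned once every sub-goal is achieved.
        return not self.unmet()


# Example: a MultiWOZ-style restaurant-booking goal.
state = UserGoalState()
for g in ["specify cuisine", "specify party size", "confirm booking"]:
    state.add(g)
state.update("specify cuisine", "achieved")
print(state.unmet())  # the two sub-goals still to be pursued
```

In a simulator built along these lines, the `unmet()` list would be fed into the response-generation prompt each turn, steering the simulated user back toward its remaining sub-goals rather than drifting off-goal.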

Country of Origin
🇺🇸 United States

Page Count
23 pages

Category
Computer Science:
Computation and Language