Adversarial Training for Failure-Sensitive User Simulation in Mental Health Dialogue Optimization

Published: December 23, 2025 | arXiv ID: 2512.20773v1

By: Ziyi Zhu, Olivier Tieleman, Caitlin A. Stamatis, and more

Potential Business Impact:

Teaches chatbot simulators to surface problems in mental health support conversations.

Business Areas:
Simulation Software

Realistic user simulation is crucial for training and evaluating task-oriented dialogue (TOD) systems, yet creating simulators that accurately replicate human behavior remains challenging. A key property of effective simulators is their ability to expose failure modes of the systems they evaluate. We present an adversarial training framework that iteratively improves user simulator realism through a competitive dynamic between a generator (user simulator) and a discriminator. Applied to mental health support chatbots, our approach demonstrates that fine-tuned simulators dramatically outperform zero-shot base models at surfacing system issues, and adversarial training further enhances diversity, distributional alignment, and predictive validity. The resulting simulator achieves a strong correlation between simulated and real failure occurrence rates across diverse chatbot configurations while maintaining low distributional divergence of failure modes. Discriminator accuracy decreases drastically after three adversarial iterations, suggesting improved realism. These results provide evidence that adversarial training is a promising approach for creating realistic user simulators in mental health support TOD domains, enabling rapid, reliable, and cost-effective system evaluation before deployment.
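The competitive dynamic the abstract describes can be illustrated with a deliberately simple sketch. Everything below is an assumption for illustration: the paper's generator and discriminator are dialogue models, not the 1-D toy classifier used here. The sketch only shows the loop structure: the discriminator learns to tell real from simulated users apart, the simulator then updates toward the real distribution, and discriminator accuracy falls over iterations, mirroring the trend reported in the abstract.

```python
import random
from statistics import mean

# Toy sketch of an adversarial user-simulation loop. All names and the
# 1-D "utterance length" feature are illustrative assumptions, not the
# paper's actual implementation.

random.seed(0)

REAL_MEAN = 12.0  # assumed: real users write ~12-token messages


def sample_real(n):
    """Draw 'real user' utterance lengths."""
    return [random.gauss(REAL_MEAN, 2.0) for _ in range(n)]


def sample_simulated(n, sim_mean):
    """Generator: a user simulator whose single parameter is its mean length."""
    return [random.gauss(sim_mean, 2.0) for _ in range(n)]


def discriminator_accuracy(real, fake):
    """Discriminator: a midpoint-threshold classifier standing in for a
    learned real-vs-simulated model; returns its accuracy."""
    threshold = (mean(real) + mean(fake)) / 2
    real_is_high = mean(real) > threshold

    def predicts_real(x):
        return (x > threshold) == real_is_high

    correct = sum(predicts_real(x) for x in real) + sum(
        not predicts_real(x) for x in fake
    )
    return correct / (len(real) + len(fake))


def adversarial_training(iterations=4, n=500, sim_mean=4.0, lr=0.5):
    """Alternate discriminator fitting and generator updates; return the
    discriminator accuracy at each iteration."""
    accuracies = []
    for _ in range(iterations):
        real = sample_real(n)
        fake = sample_simulated(n, sim_mean)
        accuracies.append(discriminator_accuracy(real, fake))
        # Generator update: move toward the real distribution so the
        # discriminator's task gets harder (gradient-free toy update).
        sim_mean += lr * (REAL_MEAN - sim_mean)
    return accuracies
```

Running `adversarial_training()` yields a sequence of accuracies that starts near 1.0 (the simulator is easy to spot) and decays toward chance as the simulator's distribution approaches the real one, which is the signal the paper uses as evidence of improved realism.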

Page Count
17 pages

Category
Computer Science:
Computation and Language