Adversarial Training for Failure-Sensitive User Simulation in Mental Health Dialogue Optimization
By: Ziyi Zhu, Olivier Tieleman, Caitlin A. Stamatis, and more
Potential Business Impact:
Teaches chatbots to surface problems in mental health conversations.
Realistic user simulation is crucial for training and evaluating task-oriented dialogue (TOD) systems, yet creating simulators that accurately replicate human behavior remains challenging. A key property of effective simulators is their ability to expose failure modes of the systems they evaluate. We present an adversarial training framework that iteratively improves user simulator realism through a competitive dynamic between a generator (user simulator) and a discriminator. Applied to mental health support chatbots, our approach demonstrates that fine-tuned simulators dramatically outperform zero-shot base models at surfacing system issues, and adversarial training further enhances diversity, distributional alignment, and predictive validity. The resulting simulator achieves a strong correlation between simulated and real failure occurrence rates across diverse chatbot configurations while maintaining low distributional divergence of failure modes. Discriminator accuracy decreases drastically after three adversarial iterations, suggesting improved realism. These results provide evidence that adversarial training is a promising approach for creating realistic user simulators in mental health support TOD domains, enabling rapid, reliable, and cost-effective system evaluation before deployment.
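The adversarial dynamic described above can be illustrated with a toy sketch: a "user simulator" (generator) produces samples from a parameterized distribution, a simple discriminator tries to separate them from "real" user data, and each iteration nudges the simulator toward realism so discriminator accuracy falls toward chance. Everything here (the Gaussian utterance-length stats, the threshold discriminator, the update rule) is an illustrative assumption, not the paper's actual implementation.

```python
import random

random.seed(0)

# Hypothetical stats: real users vs. an initially unrealistic simulator.
REAL_MEAN, STD = 12.0, 2.0   # e.g. utterance length of real users
sim_mean = 4.0               # simulator starts far from realistic

def sample(mean, n=200):
    return [random.gauss(mean, STD) for _ in range(n)]

def fit_discriminator(real, sim):
    # Simplest possible discriminator: a threshold midway between
    # the two sample means.
    return (sum(real) / len(real) + sum(sim) / len(sim)) / 2

def accuracy(threshold, real, sim):
    correct = sum(x >= threshold for x in real) + sum(x < threshold for x in sim)
    return correct / (len(real) + len(sim))

history = []
for step in range(3):  # mirrors the paper's three adversarial iterations
    real, sim = sample(REAL_MEAN), sample(sim_mean)
    thr = fit_discriminator(real, sim)
    history.append(accuracy(thr, real, sim))
    # "Fine-tune" the simulator toward realism: shift its distribution
    # toward the real data the discriminator exploited.
    sim_mean += 0.8 * (REAL_MEAN - sim_mean)

print(history)  # accuracy should decay from near-perfect toward chance (0.5)
```

As in the abstract's result, a falling discriminator accuracy is the signal of interest: once the discriminator can no longer tell simulated users from real ones, the simulator's behavior is (by this proxy) realistic.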
Similar Papers
Adversarial VR: An Open-Source Testbed for Evaluating Adversarial Robustness of VR Cybersickness Detection and Mitigation
Cryptography and Security
Tricks VR systems to ignore sickness.
Learning When to Ask: Simulation-Trained Humanoids for Mental-Health Diagnosis
Machine Learning (CS)
Trains robots to talk with people better.
Chatbots to strengthen democracy: An interdisciplinary seminar to train identifying argumentation techniques of science denial
Computers and Society
Teaches people to spot fake news using AI.