Evaluating Role-Consistency in LLMs for Counselor Training
By: Eric Rudolph, Natalie Engert, Jens Albrecht
The rise of online counseling services has highlighted the need for effective training methods for future counselors. This paper extends research on VirCo, a Virtual Client for Online Counseling, designed to complement traditional role-playing methods in academic training by simulating realistic client interactions. Building on previous work, we introduce a new dataset incorporating adversarial attacks to test the ability of large language models (LLMs) to maintain their assigned roles (role-consistency). The study focuses on evaluating the role consistency and coherence of the Vicuna model's responses, comparing these findings with earlier research. Additionally, we assess and compare various open-source LLMs for their performance in sustaining role consistency during virtual client interactions. Our contributions include creating an adversarial dataset, evaluating conversation coherence and persona consistency, and providing a comparative analysis of different LLMs.
Similar Papers
Consistently Simulating Human Personas with Multi-Turn Reinforcement Learning
Computation and Language
Keeps AI characters acting like themselves.
Using LLMs to support assessment of student work in higher education: a viva voce simulator
Computers and Society
Helps teachers check if students wrote their own work.
Enhancing Persona Consistency for LLMs' Role-Playing using Persona-Aware Contrastive Learning
Computation and Language
Makes chatbots act more like real people.