Let's Roleplay: Examining LLM Alignment in Collaborative Dialogues
By: Abhijnan Nath, Carine Graff, Nikhil Krishnaswamy
Potential Business Impact:
Helps human-AI teams collaborate better and make smarter decisions.
As Large Language Models (LLMs) integrate into diverse workflows, they are increasingly being considered "collaborators" with humans. If such AI collaborators are to be reliable, their behavior over multiturn interactions must be predictable, validated, and verified before deployment. Common alignment techniques are typically developed under simplified single-user settings and do not account for the dynamics of long-horizon multiparty interactions. This paper examines how different alignment methods affect LLM agents' effectiveness as partners in multiturn, multiparty collaborations. We study this question through the lens of friction agents: agents that intervene in group dialogues to encourage the group to slow down and reflect on its reasoning, supporting deliberative decision-making. Using a roleplay methodology, we evaluate interventions from differently-trained friction agents in collaborative task conversations. We propose a novel counterfactual evaluation framework that quantifies how friction interventions change the trajectory of group collaboration and belief alignment. Our results show that a friction-aware approach significantly outperforms common alignment baselines in promoting both convergence to a common ground, i.e., a set of agreed-upon task-relevant propositions, and correctness of task outcomes.
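To make the counterfactual evaluation idea concrete, here is a minimal sketch of how one might compare a group dialogue's trajectory with and without a friction intervention. This is an illustrative reconstruction, not the authors' released code: the names `FrictionAgent`, `rollout`, and `common_ground_score` are hypothetical stand-ins for a trained intervention model, a roleplay-based dialogue continuation, and a measure of agreement on task-relevant propositions.

```python
# Illustrative sketch of counterfactual evaluation of friction interventions.
# All names here are hypothetical; the paper's actual framework and trained
# models are not reproduced.
from dataclasses import dataclass
from typing import Callable, List

Dialogue = List[str]  # a multiparty dialogue as an ordered list of utterances


@dataclass
class FrictionAgent:
    """Stand-in for a trained friction agent that injects a reflective prompt."""
    intervention: str = "Before deciding, can we restate what we all agree on so far?"

    def intervene(self, dialogue: Dialogue) -> Dialogue:
        # A real agent would decide *when and how* to intervene from the
        # dialogue state; here we simply append the friction utterance.
        return dialogue + [f"FRICTION_AGENT: {self.intervention}"]


def counterfactual_effect(
    dialogue: Dialogue,
    agent: FrictionAgent,
    rollout: Callable[[Dialogue], Dialogue],
    common_ground_score: Callable[[Dialogue], float],
) -> float:
    """Difference in common-ground convergence with vs. without the intervention.

    `rollout` continues the group conversation (e.g., via roleplaying LLM
    participants), and `common_ground_score` scores how well the final dialogue
    reflects agreed-upon task-relevant propositions.
    """
    with_friction = rollout(agent.intervene(dialogue))
    without_friction = rollout(dialogue)
    return common_ground_score(with_friction) - common_ground_score(without_friction)
```

Under this sketch, a positive effect for a given intervention point would indicate that the friction utterance steered the group toward greater common ground than the uninterrupted conversation would have reached.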
Similar Papers
Evaluating Behavioral Alignment in Conflict Dialogue: A Multi-Dimensional Comparison of LLM Agents and Humans
Computation and Language
AI learns to argue and negotiate like people.
Learning "Partner-Aware" Collaborators in Multi-Party Collaboration
Artificial Intelligence
Teaches AI to work better with people.
Evaluating LLM-Generated Versus Human-Authored Responses in Role-Play Dialogues
Computation and Language
AI responses get worse over long role-play conversations.