Conversational Alignment with Artificial Intelligence in Context
By: Rachel Katharine Sterken, James Ravi Kirkpatrick
Potential Business Impact:
Helps AI talk like people by understanding what you mean.
The development of sophisticated artificial intelligence (AI) conversational agents based on large language models (LLMs) raises important questions about the relationship between human norms, values, and practices and AI design and performance. This article explores what it means for AI agents to be conversationally aligned with human communicative norms and practices for handling context and common ground, and it proposes a new framework for evaluating developers' design choices. We begin by drawing on the philosophical and linguistic literature on conversational pragmatics to motivate a set of desiderata, which we call the CONTEXT-ALIGN framework, for conversational alignment with human communicative practices. We then suggest that current LLM architectures, constraints, and affordances may impose fundamental limitations on achieving full conversational alignment.
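The abstract does not spell out the CONTEXT-ALIGN desiderata themselves, but one notion it invokes, common ground from conversational pragmatics, can be made concrete. The sketch below is a minimal, hypothetical illustration of common-ground tracking of the kind such desiderata might target; the class and function names (CommonGround, check_alignment) and the overall structure are assumptions for illustration, not the authors' framework.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a minimal common-ground tracker in the spirit of
# conversational pragmatics. Names and structure are illustrative
# assumptions, not the paper's CONTEXT-ALIGN framework.

@dataclass
class CommonGround:
    """Propositions mutually accepted so far in the conversation."""
    accepted: set[str] = field(default_factory=set)

    def update(self, proposition: str) -> None:
        # Once a contribution is accepted, it enters the common ground.
        self.accepted.add(proposition)

    def contains(self, presupposition: str) -> bool:
        # A felicitous utterance should not presuppose content that the
        # interlocutors have not (yet) mutually accepted.
        return presupposition in self.accepted


def check_alignment(common_ground: CommonGround,
                    utterance: str,
                    presuppositions: list[str]) -> list[str]:
    """Return the presuppositions of an utterance that are not yet in the
    common ground, i.e. points where a human speaker would normally
    accommodate or repair."""
    return [p for p in presuppositions if not common_ground.contains(p)]


if __name__ == "__main__":
    cg = CommonGround()
    cg.update("the user has a dog")

    # The reply presupposes both that the user has a dog and that a vet
    # appointment has already been scheduled.
    missing = check_alignment(
        cg,
        "I've moved your dog's vet appointment to Friday.",
        presuppositions=["the user has a dog",
                         "a vet appointment has been scheduled"],
    )
    print(missing)  # ['a vet appointment has been scheduled']
```

A check of this kind is one way a developer's design choices about context handling could, in principle, be evaluated against pragmatic norms; whether and how the paper's framework operationalizes such checks is left open by the abstract.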
Similar Papers
Social Cooperation in Conversational AI Agents
Artificial Intelligence
Teaches AI to learn from long-term mistakes.
Human Preferences for Constructive Interactions in Language Model Alignment
Human-Computer Interaction
Teaches AI to talk nicely to everyone.
Towards Anthropomorphic Conversational AI Part I: A Practical Framework
Computation and Language
Makes AI chat more like talking to a person.