Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
By: Leandra Fichtel, Maximilian Spliethöver, Eyke Hüllermeier, and more
Potential Business Impact:
Helps computers teach you better by asking questions.
The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research has focused on co-constructive explanation dialogues, in which an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLM's co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.
Similar Papers
Human Preferences for Constructive Interactions in Language Model Alignment
Human-Computer Interaction
Teaches AI to talk nicely to everyone.
From latent factors to language: a user study on LLM-generated explanations for an inherently interpretable matrix-based recommender system
Artificial Intelligence
Helps computers explain why they suggest things.