Towards Automated Error Discovery: A Study in Conversational AI
By: Dominic Petrak, Thy Thy Tran, Iryna Gurevych
Potential Business Impact:
Finds hidden mistakes in talking computer programs.
Although LLM-based conversational agents demonstrate strong fluency and coherence, they still produce undesirable behaviors (errors) that are challenging to prevent from reaching users during deployment. Recent research leverages large language models (LLMs) to detect errors and guide response-generation models toward improvement. However, current LLMs struggle to identify errors not explicitly specified in their instructions, such as those arising from updates to the response-generation model or shifts in user behavior. In this work, we introduce Automated Error Discovery, a framework for detecting and defining errors in conversational AI, and propose SEEED (Soft Clustering Extended Encoder-Based Error Detection), an encoder-based approach to its implementation. We enhance the Soft Nearest Neighbor Loss by amplifying distance weighting for negative samples and introduce Label-Based Sample Ranking to select highly contrastive examples for better representation learning. SEEED outperforms adapted baselines -- including GPT-4o and Phi-4 -- across multiple error-annotated dialogue datasets, improving accuracy on unknown-error detection by up to 8 points and demonstrating strong generalization to unknown intent detection.
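The abstract's core technical idea, a Soft Nearest Neighbor Loss with stronger weighting of negative samples, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `neg_weight` factor and the choice to amplify negative-pair terms in the denominator are assumptions about what "amplifying distance weighting for negative samples" might mean.

```python
import numpy as np

def snn_loss(embeddings, labels, temperature=1.0, neg_weight=2.0):
    """Soft Nearest Neighbor Loss with a hypothetical `neg_weight` factor
    amplifying negative-pair terms (the paper's exact scheme may differ)."""
    emb = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    n = emb.shape[0]
    # Pairwise squared Euclidean distances between all embeddings.
    diff = emb[:, None, :] - emb[None, :, :]
    dist = np.sum(diff ** 2, axis=-1)
    # Similarity kernel: closer pairs contribute larger terms.
    sim = np.exp(-dist / temperature)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(n, dtype=bool)  # exclude self-pairs
    pos = np.sum(sim * (same & off_diag), axis=1)
    neg = np.sum(sim * (~same), axis=1)
    eps = 1e-12
    # Amplified negatives inflate the denominator, so nearby
    # different-class samples are penalized more strongly.
    return float(-np.mean(np.log(pos / (pos + neg_weight * neg + eps) + eps)))
```

Under this reading, embeddings whose nearest neighbors share their label yield a loss near zero, while embeddings surrounded by other classes are pushed apart, which is the contrastive behavior the abstract attributes to SEEED's representation learning.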
Similar Papers
From Correctness to Comprehension: AI Agents for Personalized Error Diagnosis in Education
CV and Pattern Recognition
Helps AI understand why students make math mistakes.
Deceptive Automated Interpretability: Language Models Coordinating to Fool Oversight Systems
Artificial Intelligence
AI learns to trick humans by hiding secrets.
LLM-based Few-Shot Early Rumor Detection with Imitation Agent
Computation and Language
Finds fake news faster with less data.