HalluDetect: Detecting, Mitigating, and Benchmarking Hallucinations in Conversational Systems
By: Spandan Anaokar, Shrey Ganatra, Harshvivek Kashid, and more
Potential Business Impact:
Makes chatbots tell the truth, not make things up.
Large Language Models (LLMs) are widely used in industry but remain prone to hallucinations, limiting their reliability in critical applications. This work addresses hallucination reduction in consumer grievance chatbots built on LLaMA 3.1 8B Instruct, a compact model frequently used in industry. We develop HalluDetect, an LLM-based hallucination detection system that achieves an F1 score of 69%, outperforming baseline detectors by 25.44%. Benchmarking five chatbot architectures, we find that AgentBot minimizes hallucinations at 0.4159 per turn while maintaining the highest token accuracy (96.13%), making it the most effective mitigation strategy. Our findings provide a scalable framework for hallucination mitigation, demonstrating that optimized inference strategies can significantly improve factual accuracy. While applied to consumer law, our approach generalizes to other high-risk domains, enhancing trust in LLM-driven assistants. We will release the code and dataset.
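The abstract does not detail HalluDetect's internals, but a minimal sketch of an LLM-as-judge detection loop and the hallucinations-per-turn metric it reports might look like the following. The prompt wording, the `call_llm` helper, and the claim-parsing logic are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of an LLM-as-judge hallucination detector for chatbot turns.
# The judge prompt, `call_llm` callable, and parsing rules are assumptions
# made for illustration; they are not the HalluDetect implementation.

from dataclasses import dataclass
from typing import Callable, List

JUDGE_PROMPT = """You are a hallucination detector for a consumer-grievance chatbot.
Given the retrieved CONTEXT and the chatbot RESPONSE, list every claim in the
response that is not supported by the context, one per line.
Reply with NONE if all claims are supported.

CONTEXT:
{context}

RESPONSE:
{response}

Unsupported claims:"""


@dataclass
class DetectionResult:
    hallucinated_claims: List[str]

    @property
    def is_hallucinated(self) -> bool:
        return len(self.hallucinated_claims) > 0


def detect_hallucinations(
    context: str,
    response: str,
    call_llm: Callable[[str], str],  # any text-in/text-out LLM call (hypothetical)
) -> DetectionResult:
    """Ask a judge LLM to flag unsupported claims in one chatbot turn."""
    raw = call_llm(JUDGE_PROMPT.format(context=context, response=response)).strip()
    if raw.upper() == "NONE":
        return DetectionResult(hallucinated_claims=[])
    claims = [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]
    return DetectionResult(hallucinated_claims=claims)


def hallucinations_per_turn(results: List[DetectionResult]) -> float:
    """Average number of flagged claims per conversational turn."""
    if not results:
        return 0.0
    return sum(len(r.hallucinated_claims) for r in results) / len(results)
```

A detector like this can be scored against human annotations with standard precision/recall, which is presumably how the reported 69% F1 is computed; the per-turn average above mirrors the "hallucinations per turn" figure used to compare chatbot architectures.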
Similar Papers
Detecting Hallucinations in Authentic LLM-Human Interactions
Computation and Language
Finds when AI lies in real conversations.
HalluClean: A Unified Framework to Combat Hallucinations in LLMs
Computation and Language
Fixes computer writing to be truthful and correct.
Teaming LLMs to Detect and Mitigate Hallucinations
Machine Learning (CS)
Combines different AI minds to reduce fake answers.