Score: 1

HalluDetect: Detecting, Mitigating, and Benchmarking Hallucinations in Conversational Systems

Published: September 15, 2025 | arXiv ID: 2509.11619v1

By: Spandan Anaokar, Shrey Ganatra, Harshvivek Kashid, and more

Potential Business Impact:

Makes chatbots tell the truth, not make things up.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are widely used in industry but remain prone to hallucinations, limiting their reliability in critical applications. This work addresses hallucination reduction in consumer grievance chatbots built on LLaMA 3.1 8B Instruct, a compact model frequently used in industry. We develop HalluDetect, an LLM-based hallucination detection system that achieves an F1 score of 69%, outperforming baseline detectors by 25.44%. Benchmarking five chatbot architectures, we find that AgentBot minimizes hallucinations at 0.4159 per turn while maintaining the highest token accuracy (96.13%), making it the most effective mitigation strategy. Our findings provide a scalable framework for hallucination mitigation, demonstrating that optimized inference strategies can significantly improve factual accuracy. While applied to consumer law, our approach generalizes to other high-risk domains, enhancing trust in LLM-driven assistants. We will release the code and dataset.
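
The abstract reports two evaluation quantities: hallucinations per turn (0.4159 for AgentBot) and token-level accuracy (96.13%). The following is a minimal sketch, not the authors' code, of how such metrics could be aggregated from a detector's per-turn output; the TurnResult fields and function names are hypothetical assumptions for illustration.

```python
# Hypothetical sketch: aggregating per-turn detector output into the two
# metrics named in the abstract (hallucinations per turn, token accuracy).
from dataclasses import dataclass
from typing import List


@dataclass
class TurnResult:
    """Detector output for one chatbot turn (hypothetical structure)."""
    hallucinated_spans: int   # spans the detector flagged as hallucinated
    total_tokens: int         # tokens generated in this turn
    hallucinated_tokens: int  # tokens falling inside flagged spans


def hallucinations_per_turn(results: List[TurnResult]) -> float:
    """Average number of flagged hallucinated spans per conversation turn."""
    if not results:
        return 0.0
    return sum(r.hallucinated_spans for r in results) / len(results)


def token_accuracy(results: List[TurnResult]) -> float:
    """Fraction of generated tokens that were not flagged as hallucinated."""
    total = sum(r.total_tokens for r in results)
    flagged = sum(r.hallucinated_tokens for r in results)
    return 1.0 if total == 0 else 1.0 - flagged / total


if __name__ == "__main__":
    # Toy example with three turns of a conversation.
    turns = [
        TurnResult(hallucinated_spans=1, total_tokens=120, hallucinated_tokens=8),
        TurnResult(hallucinated_spans=0, total_tokens=95, hallucinated_tokens=0),
        TurnResult(hallucinated_spans=0, total_tokens=110, hallucinated_tokens=0),
    ]
    print(f"Hallucinations per turn: {hallucinations_per_turn(turns):.4f}")
    print(f"Token accuracy: {token_accuracy(turns):.2%}")
```

In this reading, a lower hallucinations-per-turn value and a higher token accuracy both indicate a more factually reliable chatbot architecture.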

Country of Origin
🇮🇳 India

Repos / Data Links

Page Count
24 pages

Category
Computer Science:
Computation and Language