Score: 1

HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification

Published: April 9, 2025 | arXiv ID: 2504.07069v1

By: Bibek Paudel, Alexander Lyzhov, Preetam Joshi, and more

Potential Business Impact:

Detects when an AI model fabricates incorrect information in its responses.

Business Areas:
Semantic Search, Internet Services

This paper introduces a comprehensive system for detecting hallucinations in large language model (LLM) outputs in enterprise settings. We present a novel taxonomy of LLM responses specific to hallucination in enterprise applications, categorizing them into context-based, common knowledge, enterprise-specific, and innocuous statements. Our hallucination detection model, HDM-2, validates LLM responses with respect to both context and generally known facts (common knowledge). It provides both hallucination scores and word-level annotations, enabling precise identification of problematic content. To evaluate it on context-based and common-knowledge hallucinations, we introduce a new dataset, HDMBench. Experimental results demonstrate that HDM-2 outperforms existing approaches across the RagTruth, TruthfulQA, and HDMBench datasets. This work addresses the specific challenges of enterprise deployment, including computational efficiency, domain specialization, and fine-grained error identification. Our evaluation dataset, model weights, and inference code are publicly available.
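To make the abstract's output format concrete, below is a minimal Python sketch of what a verifier in the style of HDM-2 might return: an overall hallucination score plus word/span-level annotations, with spans tagged using the paper's taxonomy (context-based, common knowledge, enterprise-specific, innocuous). This is not the authors' released inference code; all class and function names are illustrative, and the toy detector simply flags response tokens absent from the supplied context rather than running a learned model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class StatementType(Enum):
    # Taxonomy from the paper: how a response statement can be verified.
    CONTEXT_BASED = "context_based"              # checkable against the supplied context
    COMMON_KNOWLEDGE = "common_knowledge"        # checkable against generally known facts
    ENTERPRISE_SPECIFIC = "enterprise_specific"  # needs private/domain data to verify
    INNOCUOUS = "innocuous"                      # filler or opinion; not a factual claim

@dataclass
class SpanAnnotation:
    start: int                  # character offset of the flagged span in the response
    end: int
    statement_type: StatementType
    hallucination_score: float  # estimated probability the span is unsupported

@dataclass
class DetectionResult:
    response_score: float              # overall hallucination score for the response
    annotations: List[SpanAnnotation]  # word/span-level flags for problematic content

def detect_hallucinations(context: str, response: str) -> DetectionResult:
    """Toy stand-in for an HDM-2-style verifier: flag response tokens that never
    appear in the context. The real model scores spans against both the context
    and common knowledge with a learned classifier."""
    context_vocab = set(context.lower().split())
    annotations, offset = [], 0
    for token in response.split():
        start = response.index(token, offset)
        end = start + len(token)
        offset = end
        if token.lower().strip(".,") not in context_vocab:
            annotations.append(
                SpanAnnotation(start, end, StatementType.CONTEXT_BASED, 0.9)
            )
    score = len(annotations) / max(len(response.split()), 1)
    return DetectionResult(response_score=score, annotations=annotations)

if __name__ == "__main__":
    ctx = "The quarterly report was filed on March 3 by the finance team."
    resp = "The report was filed on April 10."
    result = detect_hallucinations(ctx, resp)
    print(f"overall score: {result.response_score:.2f}")
    for a in result.annotations:
        print(resp[a.start:a.end], a.statement_type.value, a.hallucination_score)
```

In this hypothetical interface, downstream enterprise applications could suppress or flag only the annotated spans instead of rejecting the whole response, which is the fine-grained error identification the abstract emphasizes.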

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Computation and Language