DSC2025 -- ViHallu Challenge: Detecting Hallucination in Vietnamese LLMs
By: Anh Thi-Hoang Nguyen, Khanh Quoc Tran, Tin Van Huynh, and more
Potential Business Impact:
Helps AI avoid making up fake facts in Vietnamese.
The reliability of large language models (LLMs) in production environments remains significantly constrained by their propensity to generate hallucinations -- fluent, plausible-sounding outputs that contradict or fabricate information. While hallucination detection has recently emerged as a priority in English-centric benchmarks, low-to-medium resource languages such as Vietnamese remain inadequately covered by standardized evaluation frameworks. This paper introduces the DSC2025 ViHallu Challenge, the first large-scale shared task for detecting hallucinations in Vietnamese LLMs. We present the ViHallu dataset, comprising 10,000 annotated triplets of (context, prompt, response) samples systematically partitioned into three hallucination categories: no hallucination, intrinsic, and extrinsic hallucinations. The dataset incorporates three prompt types -- factual, noisy, and adversarial -- to stress-test model robustness. A total of 111 teams participated, with the best-performing system achieving a macro-F1 score of 84.80%, compared to a baseline encoder-only score of 32.83%, demonstrating that instruction-tuned LLMs with structured prompting and ensemble strategies substantially outperform generic architectures. However, the gap to perfect performance indicates that hallucination detection remains a challenging problem, particularly for intrinsic (contradiction-based) hallucinations. This work establishes a rigorous benchmark and explores a diverse range of detection methodologies, providing a foundation for future research into the trustworthiness and reliability of Vietnamese language AI systems.
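The abstract ranks systems by macro-F1 over the three label classes. As a minimal sketch of how such a score could be computed (the label names and this scorer are illustrative assumptions, not the challenge's released evaluation code):

```python
# Hypothetical macro-F1 scorer for the three ViHallu categories.
# Macro-F1 is the unweighted mean of per-class F1, so rare classes
# count as much as frequent ones.

LABELS = ["no", "intrinsic", "extrinsic"]  # assumed label names

def macro_f1(y_true, y_pred, labels=LABELS):
    """Return the unweighted mean of per-class F1 scores."""
    f1_scores = []
    for label in labels:
        # Per-class counts of true positives, false positives, false negatives.
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(labels)
```

For example, a system that predicts "no" for every sample still earns some F1 on the "no" class but zero on the other two, which is why degenerate predictors score low under this metric.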
Similar Papers
Detecting Hallucinations in Authentic LLM-Human Interactions
Computation and Language
Finds when AI lies in real conversations.
HalluVerse25: Fine-grained Multilingual Benchmark Dataset for LLM Hallucinations
Computation and Language
Helps AI tell truth from lies in many languages.
HalluLens: LLM Hallucination Benchmark
Computation and Language
Stops AI from making up fake answers.