Score: 3

PerHalluEval: Persian Hallucination Evaluation Benchmark for Large Language Models

Published: September 25, 2025 | arXiv ID: 2509.21104v1

By: Mohammad Hosseini, Kimia Hosseini, Shayan Bali, and more

Potential Business Impact:

Provides a benchmark for detecting when AI models fabricate Persian-language facts, supporting more trustworthy Persian chatbots, search, and summarization tools.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Hallucination is a persistent issue affecting all large language models (LLMs), particularly in low-resource languages such as Persian. PerHalluEval (Persian Hallucination Evaluation) is the first dynamic hallucination evaluation benchmark tailored to the Persian language. Our benchmark leverages a three-stage LLM-driven pipeline, augmented with human validation, to generate plausible hallucinated answers and summaries for question answering (QA) and summarization tasks, focusing on detecting both extrinsic and intrinsic hallucinations. Moreover, we used the log probabilities of generated tokens to select the most believable hallucinated instances. In addition, we engaged human annotators to highlight Persian-specific contexts in the QA dataset in order to evaluate LLMs' performance on content specifically related to Persian culture. Our evaluation of 12 open- and closed-source LLMs using PerHalluEval revealed that the models generally struggle to detect hallucinated Persian text. We showed that providing external knowledge, i.e., the original document for the summarization task, can partially mitigate hallucination. Furthermore, LLMs specifically trained for Persian showed no significant difference in hallucination detection compared with other models.
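The abstract mentions using the log probabilities of generated tokens to select the most believable hallucinated instances. The paper does not spell out the exact procedure here, so the sketch below is only an illustration of how such a filter could work: rank candidate hallucinations by the mean log probability a language model assigns to their tokens, and keep the highest-scoring one. The model name, the `mean_logprob` helper, and the candidate strings are placeholders, not the authors' actual pipeline.

```python
# Illustrative sketch: pick the most "believable" candidate by mean token log prob.
# Assumptions: "gpt2" stands in for whatever (Persian-capable) scoring model the
# authors used; candidates are placeholder strings.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "gpt2"  # placeholder model, not from the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def mean_logprob(text: str) -> float:
    """Average log probability the model assigns to each token of `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift by one position so each logit predicts the *next* token.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    token_lp = log_probs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp.mean().item()

# Placeholder hallucinated candidates; in PerHalluEval these would be
# LLM-generated Persian answers or summaries.
candidates = [
    "Hallucinated answer A ...",
    "Hallucinated answer B ...",
]

# Keep the candidate the model finds most plausible (highest mean log prob),
# i.e., the hardest hallucination to spot.
most_believable = max(candidates, key=mean_logprob)
print(most_believable)
```

Averaging per-token log probability (rather than summing) keeps the score comparable across candidates of different lengths, which is why it is a common choice for this kind of plausibility ranking.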

Country of Origin
🇮🇷 🇬🇧 Iran, United Kingdom

Repos / Data Links

Page Count
21 pages

Category
Computer Science:
Computation and Language