Score: 3

Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation?

Published: April 29, 2025 | arXiv ID: 2504.20699v1

By: Evangelia Gogoulou, Shorouq Zahra, Liane Guillou, and more

Potential Business Impact:

Helps detect when a model's translation or paraphrase contradicts the source text it was given.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

A frequently observed problem with LLMs is their tendency to generate output that is nonsensical, illogical, or factually incorrect, often referred to broadly as hallucination. Building on the recently proposed HalluciGen task for hallucination detection and generation, we evaluate a suite of open-access LLMs on their ability to detect intrinsic hallucinations in two conditional generation tasks: translation and paraphrasing. We study how model performance varies across tasks and languages, and we investigate the impact of model size, instruction tuning, and prompt choice. We find that performance varies across models but is consistent across prompts. Finally, we find that NLI models perform comparably well, suggesting that LLM-based detectors are not the only viable option for this specific task.
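To make the NLI-based alternative concrete, below is a minimal sketch of how an off-the-shelf NLI model could flag an intrinsic hallucination by checking whether a generated paraphrase or translation contradicts its source. The model name (`roberta-large-mnli`), the decision threshold, and the contradiction-based decision rule are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: flag a candidate output as an intrinsic hallucination when an NLI
# model judges it to contradict the source text. Assumes Hugging Face
# transformers and torch are installed; model choice and threshold are
# illustrative, not taken from the paper.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "roberta-large-mnli"  # assumption: any MNLI-trained model could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def is_intrinsic_hallucination(source: str, hypothesis: str, threshold: float = 0.5) -> bool:
    """Return True if the hypothesis likely contradicts the source text."""
    inputs = tokenizer(source, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    return probs[0].item() > threshold

# Example: the candidate "translation" swaps a key detail relative to the source.
print(is_intrinsic_hallucination(
    "The meeting was moved to Friday morning.",
    "The meeting was moved to Friday evening.",
))
```

A usage note: because intrinsic hallucination is defined relative to the input rather than to world knowledge, the check is a simple premise/hypothesis comparison; for translation, the same idea would require either a cross-lingual NLI model or translating the source into the output language first.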

Country of Origin
🇸🇪 🇬🇧 Sweden, United Kingdom

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Computation and Language