Can LLMs Detect Intrinsic Hallucinations in Paraphrasing and Machine Translation?
By: Evangelia Gogoulou, Shorouq Zahra, Liane Guillou, and more
Potential Business Impact:
Helps computers spot when their answers contradict the input.
A frequently observed problem with LLMs is their tendency to generate output that is nonsensical, illogical, or factually incorrect, often referred to broadly as hallucination. Building on the recently proposed HalluciGen task for hallucination detection and generation, we evaluate a suite of open-access LLMs on their ability to detect intrinsic hallucinations in two conditional generation tasks: translation and paraphrasing. We study how model performance varies across tasks and languages, and we investigate the impact of model size, instruction tuning, and prompt choice. We find that performance varies across models but is consistent across prompts. Finally, we find that NLI models perform comparably well, suggesting that LLM-based detectors are not the only viable option for this specific task.
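The NLI baseline mentioned above can be illustrated with a short sketch. The example below is a minimal, hypothetical setup, assuming a pairwise HalluciGen-style detection task (a source plus two hypotheses, where the goal is to flag the hallucinated one) and an off-the-shelf MNLI model from Hugging Face; the model choice (facebook/bart-large-mnli) and the helper functions are illustrative, not the authors' actual configuration.

```python
# A minimal sketch of an NLI-based intrinsic-hallucination detector.
# Assumes a pairwise setup: given a source and two hypotheses, flag the
# hypothesis that is less supported (entailed) by the source.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "facebook/bart-large-mnli"  # any MNLI-fine-tuned model would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the hypothesis is entailed by the premise."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Label order for bart-large-mnli: 0=contradiction, 1=neutral, 2=entailment.
    return logits.softmax(dim=-1)[0, 2].item()


def pick_hallucinated(source: str, hyp1: str, hyp2: str) -> str:
    """Flag the hypothesis that the source entails less strongly."""
    return "hyp1" if entailment_prob(source, hyp1) < entailment_prob(source, hyp2) else "hyp2"


if __name__ == "__main__":
    src = "The meeting was postponed until Friday."
    good = "The meeting was moved to Friday."
    bad = "The meeting was cancelled."
    print(pick_hallucinated(src, good, bad))  # expected: "hyp2"
```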
Similar Papers
Can LLMs Detect Their Own Hallucinations?
Computation and Language
Helps computers spot when they make up facts.
HalluLens: LLM Hallucination Benchmark
Computation and Language
Stops AI from making up fake answers.
Detecting Hallucinations in Authentic LLM-Human Interactions
Computation and Language
Spots when AI lies in real conversations.