Teaming LLMs to Detect and Mitigate Hallucinations
By: Demian Till, John Smeaton, Peter Haubrick and more
Potential Business Impact:
Combines multiple AI models to reduce fabricated answers.
Recent work has demonstrated state-of-the-art results in large language model (LLM) hallucination detection and mitigation through consistency-based approaches, which aggregate multiple responses sampled from a single LLM for a given prompt. These approaches help offset limitations stemming from the imperfect data on which LLMs are trained, including biases and under-representation of information needed at deployment time, which can lead to hallucinations. We show that extending these single-model consistency methods to combine responses from multiple LLMs with different training data, training schemes and model architectures can yield substantial further improvements in hallucination detection and mitigation beyond their single-model counterparts. We evaluate this "consortium consistency" approach across many model teams drawn from a pool of 15 LLMs and explore under what conditions it is beneficial to team different LLMs together in this manner. Further, we show that these performance improvements often come with reduced inference costs, offsetting a significant drawback of single-model consistency methods.
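To make the idea concrete, below is a minimal sketch of how a consortium-consistency check might look in practice. It is not the paper's implementation: the model callables, the sampling count, and the token-overlap agreement measure (a crude stand-in for a semantic or NLI-based similarity) are all assumptions made for illustration.

```python
# Minimal sketch of consortium consistency for hallucination detection.
# Assumptions (not from the paper): `models` is a list of callables mapping a
# prompt to a generated answer string, and pairwise agreement is approximated
# with token overlap; the paper's actual scoring may differ.

from itertools import combinations
from typing import Callable, List


def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between token sets, standing in for a semantic
    similarity or NLI-based agreement measure."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)


def consortium_consistency(
    prompt: str,
    models: List[Callable[[str], str]],
    samples_per_model: int = 2,
) -> float:
    """Sample responses from several different LLMs and return the mean
    pairwise agreement; low agreement suggests a possible hallucination."""
    responses = [m(prompt) for m in models for _ in range(samples_per_model)]
    pairs = list(combinations(responses, 2))
    return sum(token_overlap(a, b) for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    # Toy stand-ins for real LLM APIs (hypothetical, for illustration only).
    model_a = lambda p: "The Eiffel Tower is in Paris, France."
    model_b = lambda p: "It is located in Paris."
    model_c = lambda p: "The Eiffel Tower stands in Berlin."  # disagrees

    score = consortium_consistency(
        "Where is the Eiffel Tower?", [model_a, model_b, model_c]
    )
    print(f"consistency score: {score:.2f}")
    # A score below a tuned threshold would flag the answer as hallucination-prone.
```

Because the responses come from models with different training data and architectures, their errors are less likely to be correlated than repeated samples from one model, which is the intuition behind the reported gains.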
Similar Papers
Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Computation and Language
Stops AI from making up wrong answers.
Consistency Is the Key: Detecting Hallucinations in LLM Generated Text By Checking Inconsistencies About Key Facts
Computation and Language
Finds fabricated facts in AI-generated text.