RAL2M: Retrieval-Augmented Learning-to-Match Against Hallucination in Compliance-Guaranteed Service Systems
By: Mengze Hong, Di Jiang, Jiangtao Wen, and more
Potential Business Impact:
Stops AI from making up wrong answers by only returning pre-approved responses.
Hallucination is a major concern in LLM-driven service systems, necessitating explicit knowledge grounding for compliance-guaranteed responses. In this paper, we introduce Retrieval-Augmented Learning-to-Match (RAL2M), a novel framework that eliminates generation hallucination by repositioning LLMs as query-response matching judges within a retrieval-based system, providing a robust alternative to purely generative approaches. To further mitigate judgment hallucination, we propose a query-adaptive latent ensemble strategy that explicitly models heterogeneous model competence and interdependencies among LLMs, deriving a calibrated consensus decision. Extensive experiments on large-scale benchmarks demonstrate that the proposed method effectively leverages the "wisdom of the crowd" and significantly outperforms strong baselines. Finally, we discuss best practices and promising directions for further exploiting latent representations in future work.
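To ground the abstract's pipeline, the following is a minimal Python sketch of how a system like RAL2M could be wired together. It is an assumption-laden illustration, not the authors' implementation: retrieve_candidates, Judge, judge_weight, and the confidence-weighted vote are all hypothetical stand-ins, and the vote in particular replaces the paper's query-adaptive latent ensemble, which models judge competence and interdependence rather than treating votes as independent.

```python
# Minimal sketch of the RAL2M flow described above: retrieve approved
# responses, let several LLM "judges" vote on whether each one answers
# the query, and return a response only on weighted consensus.
# Everything here is illustrative -- the simple confidence-weighted vote
# stands in for the paper's query-adaptive latent ensemble.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Judgment:
    match: bool        # does the candidate response answer the query?
    confidence: float  # judge's self-reported confidence in [0, 1]

# A judge is any LLM wrapped to return a binary match decision.
Judge = Callable[[str, str], Judgment]

def retrieve_candidates(
    query: str,
    knowledge_base: List[Tuple[str, str]],  # (reference query, approved response)
    top_k: int = 3,
) -> List[Tuple[str, str]]:
    """Toy lexical retriever: rank entries by token overlap with the query.
    A production system would use a dense retriever instead."""
    def overlap(a: str, b: str) -> float:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / (len(ta | tb) or 1)
    return sorted(knowledge_base, key=lambda kv: overlap(query, kv[0]), reverse=True)[:top_k]

def ral2m_answer(
    query: str,
    knowledge_base: List[Tuple[str, str]],
    judges: List[Judge],
    judge_weight: Callable[[str, int], float],  # query-adaptive competence weight
    threshold: float = 0.5,
) -> Optional[str]:
    """Return a pre-approved response only if the judge ensemble agrees it
    matches the query; otherwise abstain (None) instead of generating text."""
    for _, candidate in retrieve_candidates(query, knowledge_base):
        score = total = 0.0
        for j, judge in enumerate(judges):
            verdict = judge(query, candidate)
            w = judge_weight(query, j) * verdict.confidence
            score += w if verdict.match else 0.0
            total += w
        # Consensus: the weighted fraction voting "match" must clear the threshold.
        if total > 0 and score / total >= threshold:
            return candidate
    return None  # abstain: no compliant response found

# Toy usage with two mock judges that check for a shared keyword.
kb = [("how do I reset my password",
       "Visit Settings > Security and click 'Reset password'.")]
mock: Judge = lambda q, c: Judgment("password" in q and "password" in c.lower(), 0.9)
print(ral2m_answer("reset password help", kb, [mock, mock], lambda q, j: 1.0))
```

The key design property is visible in the final branch: when no retrieved response wins consensus, the system abstains rather than generating free-form text, which is what makes every returned answer traceable to a pre-approved knowledge-base entry.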
Similar Papers
Hybrid Retrieval for Hallucination Mitigation in Large Language Models: A Comparative Analysis
Information Retrieval
Makes AI tell the truth, not make things up.
Mitigating LLM Hallucination via Behaviorally Calibrated Reinforcement Learning
Machine Learning (CS)
Makes AI admit when it doesn't know.
Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems
Computation and Language
Makes AI tell the truth, not make things up.