Do Retrieval Augmented Language Models Know When They Don't Know?
By: Youchao Zhou, Heyan Huang, Yicheng Liu, and more
Potential Business Impact:
Makes AI admit when it doesn't know.
Existing Large Language Models (LLMs) occasionally generate plausible yet factually incorrect responses, known as hallucinations. Researchers primarily use two approaches to mitigate hallucinations: Retrieval Augmented Language Models (RALMs) and refusal post-training. However, current research predominantly emphasizes their individual effectiveness while overlooking the evaluation of the refusal capability of RALMs. In this study, we ask a fundamental question: do RALMs know when they don't know? Specifically, we ask three questions. First, are RALMs well-calibrated across different internal and external knowledge states? We examine the influence of various factors and, contrary to expectations, find that LLMs exhibit significant over-refusal behavior. Second, how does refusal post-training affect the over-refusal issue? We investigate Refusal-Aware Instruction Tuning (R-Tuning) and In-Context Fine-tuning, and find that over-refusal is mitigated by In-Context Fine-tuning but magnified by R-Tuning. However, we also find that refusal ability may conflict with answer quality. Finally, we develop a simple yet effective refusal method for refusal post-trained models that improves their overall answer quality in terms of both refusals and correct answers. Our study provides a more comprehensive understanding of how these factors influence RALM systems.
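To make the over-refusal notion concrete, the sketch below shows one plausible way to measure it: partition evaluation questions by whether the retrieved context actually supports an answer, then compare refusal rates in the two groups. This is a minimal illustration, not the paper's evaluation protocol; the helper names (query_ralm, REFUSAL_MARKERS) and the record fields are assumptions introduced here.

```python
# Minimal sketch (assumed setup, not the paper's code) for quantifying
# over-refusal in a RALM: refusing even when the retrieved context
# contains the answer, vs. correctly refusing when it does not.

REFUSAL_MARKERS = ("i don't know", "cannot answer", "not enough information")

def is_refusal(response: str) -> bool:
    """Heuristically detect whether the model declined to answer."""
    return any(marker in response.lower() for marker in REFUSAL_MARKERS)

def refusal_rates(records, query_ralm):
    """records: dicts with 'question', 'context', and 'answerable' (bool).
    query_ralm(question, context) -> model response string (user-supplied)."""
    answerable = [r for r in records if r["answerable"]]
    unanswerable = [r for r in records if not r["answerable"]]

    # Over-refusal: declining although the context supports an answer.
    over = sum(is_refusal(query_ralm(r["question"], r["context"]))
               for r in answerable)
    # Correct refusal: declining when the context lacks the answer.
    correct = sum(is_refusal(query_ralm(r["question"], r["context"]))
                  for r in unanswerable)

    return {
        "over_refusal_rate": over / max(len(answerable), 1),
        "correct_refusal_rate": correct / max(len(unanswerable), 1),
    }
```

A well-calibrated RALM would keep the first rate low while keeping the second high; the abstract's finding is that, in practice, the first rate is unexpectedly large.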
Similar Papers
Retrieval Augmented Learning: A Retrial-based Large Language Model Self-Supervised Learning and Autonomous Knowledge Generation
Artificial Intelligence
Helps computers learn without expensive training.
High Accuracy, Less Talk (HALT): Reliable LLMs through Capability-Aligned Finetuning
Computation and Language
Makes AI say "I don't know" when unsure.
Large Language Models Do NOT Really Know What They Don't Know
Computation and Language
Computers can't tell true from fake facts.