SGIC: A Self-Guided Iterative Calibration Framework for RAG
By: Guanhua Chen, Yutong Yao, Lidia S. Chao, and more
Potential Business Impact:
Makes AI smarter by checking its own answers.
Recent research in retrieval-augmented generation (RAG) has concentrated on retrieving useful information from candidate documents. However, many methods overlook the calibration capabilities of large language models (LLMs), which build on their strong in-context reasoning. This work shows that providing LLMs with specific cues substantially improves their calibration, especially across multiple calibration rounds. We present SGIC, a Self-Guided Iterative Calibration framework that uses uncertainty scores as a tool. The framework first computes uncertainty scores that capture both each document's relevance to the query and the LLM's confidence in its responses. It then re-estimates these scores iteratively, combining them with prior responses to refine calibration. We also introduce a method for constructing an iterative self-calibration training set, which trains LLMs to exploit uncertainty scores for identifying critical information and improving answer accuracy. The proposed framework significantly improves performance on both closed-source and open-weight LLMs.
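The iterative loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper functions (`relevance_score`, `answer_with_confidence`) are hypothetical stand-ins for the LLM-derived uncertainty scoring SGIC actually uses, and the stopping threshold is an assumed parameter.

```python
# Sketch of an SGIC-style iterative self-calibration loop.
# All helpers are placeholders; the real framework derives both scores
# from the LLM itself.

def relevance_score(query, doc):
    # Placeholder: token-overlap proxy for document-query relevance.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def answer_with_confidence(query, docs, prior=None):
    # Placeholder for an LLM call returning (answer, confidence).
    # Here: answer with the most relevant document's text; agreement
    # with the prior round's answer boosts confidence, mimicking the
    # way prior responses are folded into re-calibration.
    best = max(docs, key=lambda d: relevance_score(query, d))
    conf = relevance_score(query, best)
    if prior is not None and prior[0] == best:
        conf = min(1.0, conf + 0.2)
    return best, conf

def sgic_calibrate(query, docs, rounds=3, threshold=0.9):
    # Round 1: initial answer and uncertainty estimate.
    answer, conf = answer_with_confidence(query, docs)
    for _ in range(rounds - 1):
        if conf >= threshold:
            break  # confident enough; stop iterating
        # Later rounds: re-score, conditioning on the prior response.
        answer, conf = answer_with_confidence(query, docs,
                                              prior=(answer, conf))
    return answer, conf
```

The key structural point the sketch captures is that calibration is multi-round: each pass re-estimates uncertainty with the previous answer as an extra cue, and iteration stops once confidence clears a threshold or the round budget is spent.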
Similar Papers
Knowing You Don't Know: Learning When to Continue Search in Multi-round RAG through Self-Practicing
Artificial Intelligence
Helps AI know when it has enough answers.
LLM-Centric RAG with Multi-Granular Indexing and Confidence Constraints
Computation and Language
Makes AI answer questions more accurately and reliably.
LLM-Independent Adaptive RAG: Let the Question Speak for Itself
Computation and Language
Smartly finds answers, saving computer power.