Surprise Calibration for Better In-Context Learning
By: Zhihang Tan, Jingrui Hou, Ping Wang, and more
Potential Business Impact:
Makes AI smarter by fixing its unfair guesses.
In-context learning (ICL) has emerged as a powerful paradigm for task adaptation in large language models (LLMs), where models infer underlying task structures from a few demonstrations. However, ICL remains susceptible to biases arising from prior knowledge and contextual demonstrations, which can degrade LLM performance. Existing bias calibration methods typically apply fixed class priors across all inputs, limiting their efficacy in dynamic ICL settings where the context for each query differs. To address these limitations, we adopt implicit sequential Bayesian inference as a framework for interpreting ICL, identify "surprise" as an informative signal of class prior shift, and introduce a novel method, Surprise Calibration (SC). SC leverages the notion of surprise to capture the temporal dynamics of class priors, providing a more adaptive and computationally efficient solution for in-context learning. We empirically demonstrate the superiority of SC over existing bias calibration techniques across a range of benchmark natural language processing tasks.
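The abstract describes the method only at a high level. Below is a minimal, hypothetical sketch (not the paper's actual algorithm) of how a surprise-driven sequential update of class priors might be implemented: surprise is assumed here to be the negative log-probability of each demonstration's label under the current prior, and calibration is assumed to divide the model's class probabilities by that prior. The function names `surprise_weighted_prior`, `calibrate`, and the `base_rate` step size are illustrative inventions, not from the paper.

```python
# Hypothetical sketch of surprise-driven class-prior calibration for ICL.
# Assumption: more surprising demonstration labels shift the prior more strongly.
import numpy as np

def surprise_weighted_prior(demo_labels, num_classes, base_rate=0.1):
    """Estimate a class prior from an ordered sequence of demonstration labels.

    demo_labels : list[int] -- labels of the in-context demonstrations, in order
    num_classes : int       -- number of classes in the task
    base_rate   : float     -- baseline step size for the prior update (assumed)
    """
    prior = np.full(num_classes, 1.0 / num_classes)   # start from a uniform prior
    for y in demo_labels:
        surprise = -np.log(prior[y])                  # large when the label was unexpected
        lr = min(base_rate * surprise, 1.0)           # surprising labels move the prior more
        target = np.zeros(num_classes)
        target[y] = 1.0
        prior = (1.0 - lr) * prior + lr * target      # convex step toward the observed label
        prior /= prior.sum()                          # renormalize for numerical safety
    return prior

def calibrate(logits, prior):
    """Divide the model's class probabilities by the estimated prior and renormalize."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    calibrated = probs / prior
    return calibrated / calibrated.sum()

# Usage: three demonstrations labeled [0, 0, 1] in a binary task,
# then calibrate a query's logits against the inferred prior.
prior = surprise_weighted_prior([0, 0, 1], num_classes=2)
print(calibrate(np.array([2.0, 1.0]), prior))
```

The key design choice this sketch illustrates is that the prior is re-estimated per query from that query's own demonstration context, rather than fixed once for all inputs as in static calibration baselines.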
Similar Papers
Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning
Machine Learning (Stat)
Fixes AI mistakes by learning from examples.
Provable Low-Frequency Bias of In-Context Learning of Representations
Machine Learning (CS)
Makes computers learn new things from examples.
In-Context Learning with Hypothesis-Class Guidance
Machine Learning (CS)
Helps AI learn tasks better with instructions.