Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning
By: Korel Gundem, Juncheng Dong, Dennis Zhang, and more
Potential Business Impact:
Fixes AI mistakes by learning from examples.
In-Context Learning (ICL) allows Large Language Models (LLMs) to adapt to new tasks with just a few examples, but their predictions often suffer from systematic biases, leading to unstable performance in classification. While calibration techniques have been proposed to mitigate these biases, we show that, in the logit space, many of these methods are equivalent to merely shifting the LLM's decision boundary without the ability to alter its orientation. This proves inadequate when biases cause the LLM to be severely misdirected. To address these limitations and provide a unifying framework, we propose Supervised Calibration (SC), a loss-minimization-based framework that learns an optimal, per-class affine transformation of the LLM's predictive probabilities in the logit space without requiring external data beyond the context. By using a more expressive functional class, SC not only subsumes many existing ICL calibration methods as special cases, but can also alter, and even completely reverse, the orientation of the LLM's decision boundary. Furthermore, SC's loss-based nature facilitates the seamless integration of two purpose-built regularization techniques: context-invariance and directional trust-region. The former is designed to tackle the instability issue in ICL, while the latter controls the degree of calibration. Finally, SC delivers state-of-the-art performance over calibration baselines in the 4-shot, 8-shot, and 16-shot settings across all nine datasets for Mistral-7B-Instruct-v0.3, LLaMA-2-7B-chat, and Qwen2-7B-Instruct.
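To make the core idea concrete, here is a minimal sketch of a per-class affine calibration in logit space, fitted by loss minimization, in the spirit of what the abstract describes. This is an illustration, not the authors' implementation: the arrays `logits` and `labels`, the learning rate, the step count, and the plain gradient-descent loop are all assumptions, and the paper's context-invariance and directional trust-region regularizers are omitted.

```python
# Minimal sketch (assumptions noted above): fit one scale a_k and offset b_k per
# class by minimizing cross-entropy on a handful of labeled in-context examples.
# Bias-shift calibration methods adjust only b; allowing a to vary (including
# negative values) is what lets the decision boundary change orientation.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_affine_calibration(logits, labels, lr=0.1, steps=500):
    """Learn per-class (a_k, b_k) so that a_k * logit_k + b_k minimizes
    mean cross-entropy on the provided few-shot examples."""
    n, k = logits.shape
    a = np.ones(k)   # per-class scales; negatives can reverse the boundary
    b = np.zeros(k)  # per-class offsets; shift-only methods stop here
    onehot = np.eye(k)[labels]
    for _ in range(steps):
        p = softmax(logits * a + b)
        g = (p - onehot) / n               # gradient of mean cross-entropy w.r.t. logits
        a -= lr * (g * logits).sum(axis=0)
        b -= lr * g.sum(axis=0)
    return a, b

# Toy usage: 8 "shots" over 2 classes, with a systematic bias toward class 0.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 2)) + np.array([2.0, 0.0])
labels = rng.integers(0, 2, size=8)
a, b = fit_affine_calibration(logits, labels)
calibrated = softmax(logits * a + b)
```

Setting `a` to all ones and learning only `b` recovers the shift-only behavior the abstract attributes to prior calibration methods, which is why the affine family subsumes them as special cases.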
Similar Papers
Surprise Calibration for Better In-Context Learning
Computation and Language
Makes AI smarter by fixing its unfair guesses.
The Alchemy of Thought: Understanding In-Context Learning Through Supervised Classification
Machine Learning (CS)
Makes AI learn new things by showing it examples.
Corrective In-Context Learning: Evaluating Self-Correction in Large Language Models
Computation and Language
Teaches computers to fix their own mistakes.