The Alchemy of Thought: Understanding In-Context Learning Through Supervised Classification
By: Harshita Narnoli, Mihai Surdeanu
Potential Business Impact:
Lets AI adapt to new tasks simply by showing it a few examples, without retraining.
In-context learning (ICL) has become a prominent paradigm for rapidly customizing LLMs to new tasks without fine-tuning. However, despite the empirical evidence of its usefulness, we still do not truly understand how ICL works. In this paper, we compare the behavior of in-context learning with supervised classifiers trained on the same ICL demonstrations to investigate three research questions: (1) Do LLMs with ICL behave similarly to classifiers trained on the same examples? (2) If so, which classifiers are closer: those based on gradient descent (GD) or those based on k-nearest neighbors (kNN)? (3) When they do not behave similarly, what conditions are associated with the differences in behavior? Using text classification as a use case, with six datasets and three LLMs, we observe that LLMs behave similarly to these classifiers when the relevance of the demonstrations is high. On average, ICL is closer to kNN than to logistic regression, providing empirical evidence that the attention mechanism behaves more like kNN than like GD. However, when demonstration relevance is low, LLMs outperform these classifiers, likely because they can fall back on their parametric memory, a luxury the classifiers do not have.
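To make the comparison concrete, here is a minimal sketch of the idea behind the study, not the paper's actual pipeline: the demonstrations that would go into an LLM's prompt are reused as a tiny training set for a kNN classifier and a logistic regression (gradient-descent-trained) classifier, and each classifier's predictions are then scored for agreement with the LLM's in-context predictions. The TF-IDF features, the toy sentiment data, and the `icl_preds` outputs below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# The ICL demonstrations double as the supervised classifiers' training set.
demo_texts  = ["great movie", "terrible plot", "loved it", "boring and slow"]
demo_labels = ["pos", "neg", "pos", "neg"]
test_texts  = ["an instant classic", "a waste of time"]
icl_preds   = ["pos", "neg"]  # hypothetical LLM predictions made with the demos in its prompt

# Stand-in text representation; the paper's actual features may differ.
vec = TfidfVectorizer().fit(demo_texts + test_texts)
X_train, X_test = vec.transform(demo_texts), vec.transform(test_texts)

# kNN over the demonstrations (the retrieval-like side of the comparison).
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, demo_labels)
# Logistic regression as the gradient-descent-trained counterpart.
lr = LogisticRegression(max_iter=1000).fit(X_train, demo_labels)

# Agreement of each classifier with the LLM's in-context predictions.
knn_agree = np.mean(knn.predict(X_test) == np.array(icl_preds))
lr_agree = np.mean(lr.predict(X_test) == np.array(icl_preds))
print(f"kNN vs ICL agreement: {knn_agree:.2f} | LR vs ICL agreement: {lr_agree:.2f}")
```

Under this framing, the paper's main finding would correspond to the kNN agreement score exceeding the logistic regression one when the demonstrations are relevant to the test inputs.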
Similar Papers
Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning
Machine Learning (Stat)
Fixes AI mistakes by learning from examples.
On the Relationship Between the Choice of Representation and In-Context Learning
Computation and Language
Lets computers learn new things better.
Differentially Private In-Context Learning with Nearest Neighbor Search
Machine Learning (CS)
Protects your private info when AI learns.