Corrective In-Context Learning: Evaluating Self-Correction in Large Language Models
By: Mario Sanz-Guerrero, Katharina von der Wense
Potential Business Impact:
Tests whether computers can learn to fix their own mistakes.
In-context learning (ICL) has transformed the use of large language models (LLMs) for NLP tasks, enabling few-shot learning by conditioning on labeled examples without finetuning. Despite its effectiveness, ICL is prone to errors, especially for challenging examples. With the goal of improving the performance of ICL, we propose corrective in-context learning (CICL), an approach that incorporates a model's incorrect predictions alongside ground truth corrections into the prompt, aiming to enhance classification accuracy through self-correction. However, contrary to our hypothesis, extensive experiments on text classification tasks demonstrate that CICL consistently underperforms standard ICL, with performance degrading as the proportion of corrections in the prompt increases. Our findings indicate that CICL introduces confusion by disrupting the model's task understanding, rather than refining its predictions. Additionally, we observe that presenting harder examples in standard ICL does not improve performance, suggesting that example difficulty alone may not be a reliable criterion for effective selection. By presenting these negative results, we provide important insights into the limitations of self-corrective mechanisms in LLMs and offer directions for future research.
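To make the difference between standard ICL and CICL concrete, here is a minimal sketch of how the two kinds of prompts could be assembled for a text classification task. The paper does not specify its exact prompt template, so the field names ("Initial prediction:", "Correct label:") and formatting below are illustrative assumptions, not the authors' released code.

```python
def build_icl_prompt(demos, query):
    """Standard ICL: each demonstration is a (text, gold_label) pair."""
    parts = [f"Text: {text}\nLabel: {gold}" for text, gold in demos]
    parts.append(f"Text: {query}\nLabel:")
    return "\n\n".join(parts)


def build_cicl_prompt(demos, query):
    """CICL-style prompt (illustrative): each demonstration is a
    (text, model_prediction, gold_label) triple, showing the model's
    earlier (incorrect) prediction alongside the ground-truth correction."""
    parts = []
    for text, predicted, gold in demos:
        parts.append(
            f"Text: {text}\n"
            f"Initial prediction: {predicted}\n"
            f"Correct label: {gold}"
        )
    parts.append(f"Text: {query}\nInitial prediction:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    # Toy sentiment example; labels and texts are made up for illustration.
    demos_icl = [("The plot was dull and predictable.", "negative")]
    demos_cicl = [("The plot was dull and predictable.", "positive", "negative")]
    query = "A moving, beautifully acted film."
    print(build_icl_prompt(demos_icl, query))
    print("---")
    print(build_cicl_prompt(demos_cicl, query))
```

Under this reading, the paper's finding is that increasing the share of correction-style demonstrations (the second format) in the prompt degrades accuracy relative to the plain format.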
Similar Papers
Efficient Text Classification with Conformal In-Context Learning
Computation and Language
Makes AI smarter and faster for reading text.
Is In-Context Learning Learning?
Computation and Language
Computers learn new things from examples, not just memorizing.