In-context Learning vs. Instruction Tuning: The Case of Small and Multilingual Language Models
By: David Ponce, Thierry Etchegoyhen
Potential Business Impact:
Shows when simple prompting can stand in for costly instruction tuning, helping smaller and multilingual models follow user instructions.
Instruction following is a critical ability for Large Language Models to perform downstream tasks. The standard approach to instruction alignment relies on a dedicated phase of model tuning over curated instruction datasets, optionally complemented with an alignment step over human preferences. Recent work has shown the potential of in-context learning (ICL) alternatives to guide base models towards instruction following. This type of approach is particularly relevant for extending instruction following across languages and across models of varying sizes adapted to different types of usage. In this work, we compare ICL and instruction fine-tuning in English, French and Spanish on Small Language Models, and provide experimental results on applying Direct Preference Optimisation (DPO) over base models. Our results show that scenarios involving multilingual and smaller models lead to degraded ICL instruction-following performance, only partially mitigated by DPO alignment. This study aims to further our understanding of the current strengths and limitations of alternative methods for instruction following.
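For context on the alignment step the abstract mentions: DPO tunes a model directly on pairs of preferred and dispreferred responses, without training a separate reward model. Below is a minimal sketch of the standard DPO objective (Rafailov et al., 2023) in PyTorch; the function and argument names are illustrative, and the per-response log-probabilities are assumed to be precomputed. This is not the paper's own code, only a sketch of the technique it applies over base models.

```python
# Minimal sketch of the DPO objective (Rafailov et al., 2023).
# Names are illustrative; each tensor holds the summed log-probability
# of a response under the policy being tuned or under the frozen
# reference (base) model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Implicit rewards: scaled log-ratios of the policy to the reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximise the margin between preferred and dispreferred responses
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Example with dummy log-probabilities for a batch of two preference pairs
loss = dpo_loss(torch.tensor([-4.0, -3.5]), torch.tensor([-6.0, -5.0]),
                torch.tensor([-5.0, -4.0]), torch.tensor([-5.5, -4.8]))
```

Here beta controls how strongly the tuned policy is kept close to the reference model; in this paper's setting, the reference is a base (not instruction-tuned) model.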
Similar Papers
Improving Instruct Models for Free: A Study on Partial Adaptation
Computation and Language
Makes AI better at learning from examples.
Towards Alignment-Centric Paradigm: A Survey of Instruction Tuning in Large Language Models
Computation and Language
Teaches AI to follow instructions better.
You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Model
Computation and Language
Teaches computers to do many jobs well at once.