In-context Learning vs. Instruction Tuning: The Case of Small and Multilingual Language Models

Published: March 3, 2025 | arXiv ID: 2503.01611v2

By: David Ponce, Thierry Etchegoyhen

Potential Business Impact:

Improves instruction following in language models, including smaller and multilingual models suited to lower-cost deployments.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Instruction following is a critical ability for Large Language Models to perform downstream tasks. The standard approach to instruction alignment relies on a dedicated phase of model tuning over curated instruction datasets, optionally complemented with an alignment step over human preferences. Recent work has shown the potential of in-context learning (ICL) alternatives to guide base models towards instruction following. This type of approach is particularly relevant for extending instruction following across languages and to models of varying sizes adapted to different types of usage. In this work, we compare ICL and instruction fine-tuning in English, French and Spanish on Small Language Models, and provide experimental results on applying Direct Preference Optimisation (DPO) over base models. Our results show that scenarios involving multilingual and smaller models lead to degraded ICL instruction-following performance, only partially mitigated by DPO alignment. This study aims to further our understanding of the current strengths and limitations of alternative methods for instruction following.
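For context, DPO aligns a policy model pi_theta against a frozen reference model pi_ref using preference pairs, with no explicit reward model. A standard formulation of its objective (the general DPO loss, not a detail specific to this paper's setup) is:

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\left[
\log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
- \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right)
\right]
\]

where \(y_w\) and \(y_l\) are the preferred and dispreferred responses to prompt \(x\), \(\sigma\) is the logistic function, and \(\beta\) scales the implicit reward, controlling how far the policy may drift from the reference.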

Page Count
19 pages

Category
Computer Science:
Computation and Language