CLASS-IT: Conversational and Lecture-Aligned Small-Scale Instruction Tuning for BabyLMs
By: Luca Capone, Alessandro Bondielli, Alessandro Lenci
Potential Business Impact:
Teaches small AI models to follow instructions and hold better conversations.
This work investigates whether small-scale LMs can benefit from instruction tuning. We compare conversational and question-answering instruction-tuning datasets, applied either in a merged or in a sequential curriculum, using decoder-only models with 100M and 140M parameters. Evaluation spans both fine-tuning (SuperGLUE) and zero-shot (BLiMP, EWoK, WUGs, entity tracking, and psycholinguistic correlation) settings. Results show that instruction tuning yields small but consistent gains in fine-tuning scenarios, with sequential curricula outperforming merged data; however, improvements do not consistently transfer to zero-shot tasks, suggesting a trade-off between interaction-focused adaptation and broad linguistic generalization. These results highlight both the potential and the constraints of adapting human-inspired learning strategies to low-resource LMs, and point toward hybrid, curriculum-based approaches for enhancing generalization under ecological training limits.
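To make the merged-versus-sequential distinction concrete, here is a minimal sketch of the two curricula: a merged curriculum pools the conversational and question-answering instruction data into a single training phase, while a sequential curriculum runs one phase per dataset. The DummyModel class, the example data, and the helper function names are hypothetical placeholders for illustration, not the authors' implementation.

```python
import random


class DummyModel:
    """Stand-in for a small (100M-140M parameter) decoder-only LM."""

    def __init__(self):
        self.updates = 0

    def training_step(self, instruction, response):
        # A real model would compute the LM loss on `response` conditioned
        # on `instruction` and apply an optimizer update here.
        self.updates += 1


def train_phase(model, examples, epochs=1, seed=0):
    """Run one instruction-tuning phase over (instruction, response) pairs."""
    rng = random.Random(seed)
    data = list(examples)
    for _ in range(epochs):
        rng.shuffle(data)
        for instruction, response in data:
            model.training_step(instruction, response)
    return model


def instruction_tune(model, conversational, qa, curriculum="sequential"):
    if curriculum == "merged":
        # One phase over the pooled conversational + QA instruction data.
        return train_phase(model, conversational + qa)
    if curriculum == "sequential":
        # Two phases: conversational adaptation first, then QA instructions.
        return train_phase(train_phase(model, conversational), qa)
    raise ValueError(f"unknown curriculum: {curriculum}")


if __name__ == "__main__":
    conv = [("Hi, how are you?", "I'm doing well, thanks for asking!")]
    qa = [("What is the capital of France?", "Paris.")]
    tuned = instruction_tune(DummyModel(), conv, qa, curriculum="sequential")
    print(tuned.updates)  # one update per example, run phase by phase
```

Under this reading, the abstract's finding is that the two-phase (sequential) branch tends to help more on fine-tuning benchmarks than pooling the data, though neither reliably improves the zero-shot evaluations.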
Similar Papers
Towards Alignment-Centric Paradigm: A Survey of Instruction Tuning in Large Language Models
Computation and Language
Teaches AI to follow instructions better.
Exploring Fine-Tuning of Large Audio Language Models for Spoken Language Understanding under Limited Speech Data
Sound
Teaches computers to understand speech better with less data.
Fine-Tuning Causal LLMs for Text Classification: Embedding-Based vs. Instruction-Based Approaches
Computation and Language
Makes big AI models learn new tasks with less computing power.