CLASS-IT: Conversational and Lecture-Aligned Small-Scale Instruction Tuning for BabyLMs

Published: October 29, 2025 | arXiv ID: 2510.25364v1

By: Luca Capone, Alessandro Bondielli, Alessandro Lenci

Potential Business Impact:

Teaches small AI models to follow instructions and hold conversations better.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This work investigates whether small-scale LMs can benefit from instruction tuning. We compare conversational and question-answering instruction tuning datasets, applied either in a merged or sequential curriculum, using decoder-only models with 100M and 140M parameters. Evaluation spans both fine-tuning (SuperGLUE) and zero-shot (BLiMP, EWoK, WUGs, entity tracking, and psycholinguistic correlation) settings. Results show that instruction tuning yields small but consistent gains in fine-tuning scenarios, with sequential curricula outperforming merged data; however, improvements do not consistently transfer to zero-shot tasks, suggesting a trade-off between interaction-focused adaptation and broad linguistic generalization. These results highlight both the potential and the constraints of adapting human-inspired learning strategies to low-resource LMs, and point toward hybrid, curriculum-based approaches for enhancing generalization under ecological training limits.
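To make the merged-versus-sequential distinction concrete, below is a minimal PyTorch sketch of the two curriculum strategies. The datasets, model, and hyperparameters are toy placeholders for illustration, not the authors' actual setup; the point is only the training-order difference between the two regimes.

```python
# Sketch of the two curricula compared in the paper: "merged" (shuffle
# conversational + QA instruction data together) vs. "sequential" (one
# tuning phase per dataset). All names below are hypothetical stand-ins.
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset


class ToyInstructionDataset(Dataset):
    """Stand-in for a tokenized instruction-tuning corpus."""
    def __init__(self, n_examples: int, seq_len: int = 32, vocab: int = 1000):
        self.data = torch.randint(0, vocab, (n_examples, seq_len))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        x = self.data[i]
        return x[:-1], x[1:]  # inputs and next-token targets


def tune(model, dataset, epochs=1, lr=1e-4):
    """One instruction-tuning pass over a dataset (causal LM loss)."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            logits = model(x)  # (batch, seq, vocab)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), y.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()


conversational = ToyInstructionDataset(256)      # dialogue-style data
question_answering = ToyInstructionDataset(256)  # QA-style data

model = torch.nn.Sequential(  # trivial stand-in for a 100M-140M decoder
    torch.nn.Embedding(1000, 64),
    torch.nn.Linear(64, 1000),
)

# Merged curriculum: both sources interleaved in one shuffled pass.
tune(model, ConcatDataset([conversational, question_answering]))

# Sequential curriculum: one phase per source, in a fixed order (the
# regime the abstract reports as stronger in fine-tuning evaluations).
tune(model, conversational)
tune(model, question_answering)
```

In the merged regime every batch can mix both data types, while the sequential regime exposes the model to one dataset at a time; the paper's finding is that the latter ordering yields small but consistent gains on fine-tuning benchmarks such as SuperGLUE.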

Country of Origin
🇮🇹 Italy

Page Count
9 pages

Category
Computer Science:
Computation and Language