How to Teach Large Multimodal Models New Skills

Published: October 9, 2025 | arXiv ID: 2510.08564v1

By: Zhen Zhu, Yiming Gong, Yao Xiao and more

Potential Business Impact:

Teaches AI new things without forgetting old ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

How can we teach large multimodal models (LMMs) new skills without erasing prior abilities? We study sequential fine-tuning on five target skills while monitoring general ability on eight held-out benchmarks across three model families. We observe that apparent "forgetting" on held-out tasks after narrow fine-tuning can partly recover at later stages. We trace this behavior to a measurable shift in the output token distribution, manifested through a simple counting-bias probe that co-varies with forgetting. Guided by this picture, we identify two simple, robust tuning recipes that learn strongly while limiting drift: (i) updating only the self-attention projection layers, and (ii) updating only the MLP Gate&Up while freezing the Down projection. Across models and tasks, these choices deliver strong target gains while largely preserving held-out performance. Code is available at https://github.com/jessemelpolio/LMM_CL
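The two recipes in the abstract amount to unfreezing only a narrow slice of the model's parameters. A minimal sketch of that selection logic is below; it assumes LLaMA-style parameter names (`q_proj`, `gate_proj`, `down_proj`, etc.), which vary across model families, and the recipe labels are hypothetical names for illustration, not identifiers from the paper's code.

```python
# Sketch: decide which parameters stay trainable under the paper's two recipes.
# Assumes LLaMA-style module naming; adapt the tags for other model families.

ATTN_PROJ = ("q_proj", "k_proj", "v_proj", "o_proj")  # self-attention projections
MLP_GATE_UP = ("gate_proj", "up_proj")                # MLP Gate&Up (Down stays frozen)

def is_trainable(param_name: str, recipe: str) -> bool:
    """Return True if a parameter should be updated under the given recipe.

    recipe:
      "attn_proj" -> update only the self-attention projection layers
      "gate_up"   -> update only MLP Gate&Up, freezing the Down projection
    """
    if recipe == "attn_proj":
        return any(tag in param_name for tag in ATTN_PROJ)
    if recipe == "gate_up":
        return any(tag in param_name for tag in MLP_GATE_UP)
    raise ValueError(f"unknown recipe: {recipe!r}")
```

In a framework like PyTorch, one would apply this over `model.named_parameters()`, setting `p.requires_grad = is_trainable(name, recipe)` before fine-tuning, so everything outside the chosen projections stays frozen.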

Country of Origin
🇺🇸 United States

Repos / Data Links

https://github.com/jessemelpolio/LMM_CL
Page Count
37 pages

Category
Computer Science: Artificial Intelligence