Score: 1

Multimodal LLMs Do Not Compose Skills Optimally Across Modalities

Published: November 11, 2025 | arXiv ID: 2511.08113v2

By: Paula Ontalvilla, Aitor Ormazabal, Gorka Azkune

Potential Business Impact:

Multimodal AI models struggle to combine skills learned in different modalities, limiting their reliability on new, composite tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Skill composition is the ability to combine previously learned skills to solve new tasks. As neural networks acquire increasingly complex skills during their pretraining, it is not clear how successfully they can compose them. In this paper, we focus on Multimodal Large Language Models (MLLMs) and study their ability to compose skills across modalities. To this end, we design three evaluation tasks that can be solved by sequentially composing two modality-dependent skills, and evaluate several open MLLMs under two main settings: i) prompting the model to directly solve the task, and ii) using a two-step cascaded inference approach, which manually enforces the composition of the two skills for a given task. Even with these straightforward compositions, we find that all evaluated MLLMs exhibit a significant cross-modality skill composition gap. To mitigate this gap, we explore two alternatives: i) chain-of-thought prompting that explicitly instructs MLLMs to compose skills, and ii) a specific fine-tuning recipe that promotes skill composition. Although these strategies improve performance, the models still exhibit significant skill composition gaps, suggesting that more research is needed to improve cross-modal skill composition in MLLMs.
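
The following is a minimal sketch of the two evaluation settings the abstract describes: direct prompting versus a two-step cascaded inference that manually enforces skill composition. The `query_mllm` wrapper, the prompt names, and the gap definition are illustrative assumptions, not the paper's exact tasks or code.

```python
# Sketch contrasting direct prompting with two-step cascaded inference.
# `query_mllm` is a hypothetical placeholder for the MLLM under evaluation.

def query_mllm(prompt: str, image=None) -> str:
    """Hypothetical wrapper around a single MLLM call (image optional)."""
    raise NotImplementedError("Plug in the model being evaluated here.")


def direct(image, task_prompt: str) -> str:
    # Setting (i): ask the model to solve the composite task in one shot.
    return query_mllm(task_prompt, image=image)


def cascaded(image, perception_prompt: str, reasoning_prompt: str) -> str:
    # Setting (ii): first apply the modality-dependent skill (e.g., read
    # the relevant content from the image)...
    intermediate = query_mllm(perception_prompt, image=image)
    # ...then feed its text output to the second, text-only skill.
    return query_mllm(f"{reasoning_prompt}\n{intermediate}")


def composition_gap(direct_accuracy: float, cascaded_accuracy: float) -> float:
    # Cross-modal skill composition gap: how much accuracy is lost when the
    # model must compose the two skills on its own rather than having the
    # composition enforced step by step.
    return cascaded_accuracy - direct_accuracy
```

Under this framing, a positive gap indicates that the model can perform each skill in isolation but fails to compose them when prompted end to end.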

Country of Origin
🇪🇸 Spain

Repos / Data Links

Page Count
34 pages

Category
Computer Science:
Computation and Language