Self-Improvement in Multimodal Large Language Models: A Survey
By: Shijian Deng, Kai Wang, Tianyu Yang, and more
Potential Business Impact:
Lets AI models teach themselves from many kinds of data, cutting the need for human effort.
Recent advances in self-improvement for Large Language Models (LLMs) have enhanced model capabilities efficiently, without significantly increasing costs, particularly in terms of human effort. While this area is still relatively young, its extension to the multimodal domain holds immense potential for leveraging diverse data sources and developing more general self-improving models. This survey is the first to provide a comprehensive overview of self-improvement in Multimodal LLMs (MLLMs). We organize the current literature and discuss methods from three perspectives: 1) data collection, 2) data organization, and 3) model optimization, to facilitate the further development of self-improvement in MLLMs. We also cover commonly used evaluations and downstream applications. Finally, we conclude by outlining open challenges and future research directions.
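The three perspectives above map onto the stages of a generic self-improvement loop: the model generates its own training data, curates it, then trains on the result. As a minimal illustrative sketch (not taken from the paper; the classes and functions below, such as `Model.generate`, `self_score`, and `finetune`, are hypothetical placeholders), one round of such a loop might look like this:

```python
# Illustrative sketch of a generic MLLM self-improvement loop.
# All names here (Model, generate, self_score, finetune) are hypothetical
# placeholders standing in for whatever a concrete method uses.

from dataclasses import dataclass
from typing import List


@dataclass
class Sample:
    prompt: str          # multimodal prompt (e.g., image + question)
    response: str        # model-generated answer
    score: float = 0.0   # self-assessed quality


@dataclass
class Model:
    version: int = 0

    def generate(self, prompt: str, n: int = 4) -> List[str]:
        # Stage 1 (data collection): sample multiple candidate responses.
        return [f"response {i} to {prompt!r} (v{self.version})" for i in range(n)]

    def self_score(self, prompt: str, response: str) -> float:
        # Stage 2 helper: the model judges its own output (self-reward);
        # a real method would use a learned or rule-based critic.
        return (len(response) % 10) / 10.0

    def finetune(self, data: List[Sample]) -> "Model":
        # Stage 3 (model optimization): e.g., supervised or preference
        # tuning on the curated self-generated data.
        return Model(version=self.version + 1)


def self_improve(model: Model, prompts: List[str], rounds: int = 2) -> Model:
    for _ in range(rounds):
        # 1) Data collection: generate candidate responses per prompt.
        pool = [Sample(p, r) for p in prompts for r in model.generate(p)]
        # 2) Data organization: score candidates and keep high-quality ones.
        for s in pool:
            s.score = model.self_score(s.prompt, s.response)
        curated = [s for s in pool if s.score >= 0.5]
        # 3) Model optimization: train on the curated data.
        model = model.finetune(curated)
    return model


if __name__ == "__main__":
    improved = self_improve(Model(), ["Describe the image.", "Count the objects."])
    print(f"Finished at model version {improved.version}")
```

Concrete methods differ mainly in how each stage is instantiated: how candidates are sampled, how self-assessment and filtering work, and whether optimization is supervised fine-tuning or preference-based.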
Similar Papers
When Continue Learning Meets Multimodal Large Language Model: A Survey
Machine Learning (CS)
Helps AI learn new things without forgetting old ones.
Large Multimodal Models-Empowered Task-Oriented Autonomous Communications: Design Methodology and Implementation Challenges
Machine Learning (CS)
AI helps machines talk and work together better.
Multimodal Large Language Models for Medicine: A Comprehensive Survey
Machine Learning (CS)
Helps doctors understand sickness using pictures and words.