Towards Multi-modal Graph Large Language Model
By: Xin Wang, Zeyang Zhang, Linxin Xiao, and more
Potential Business Impact:
Teaches computers to understand many kinds of connected information.
Multi-modal graphs, which integrate diverse multi-modal features and relations, are ubiquitous in real-world applications. However, existing multi-modal graph learning methods are typically trained from scratch for specific graph data and tasks, and fail to generalize across diverse multi-modal graph data and tasks. To bridge this gap, we explore the potential of Multi-modal Graph Large Language Models (MG-LLM) to unify and generalize across diverse multi-modal graph data and tasks. We propose a unified framework of multi-modal graph data, tasks, and models, uncovering the inherent multi-granularity and multi-scale characteristics of multi-modal graphs. Specifically, we present five key desired characteristics for MG-LLM: 1) a unified space for multi-modal structures and attributes, 2) the capability to handle diverse multi-modal graph tasks, 3) multi-modal graph in-context learning, 4) multi-modal graph interaction with natural language, and 5) multi-modal graph reasoning. We then elaborate on the key challenges, review related work, and highlight promising future research directions towards realizing these ambitious characteristics. Finally, we summarize existing multi-modal graph datasets pertinent to model training. We believe this paper can contribute to the ongoing advancement of research towards MG-LLM and its generalization across multi-modal graph data and tasks.
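To make the notion of a multi-modal graph concrete, here is a minimal Python sketch of a toy data model in which each node carries attributes from several modalities and each edge carries a typed relation. The names (Node, Edge, MultiModalGraph) and fields are hypothetical illustrations, not the paper's actual framework; an MG-LLM as described in the abstract would additionally map these per-modality attributes into one unified representation space.

# Illustrative sketch only: a graph whose nodes mix modalities (text plus an
# image embedding) and whose edges carry typed relations. All names here are
# hypothetical and not taken from the paper.
from dataclasses import dataclass, field


@dataclass
class Node:
    node_id: str
    text: str | None = None                    # textual attribute
    image_embedding: list[float] | None = None  # visual attribute


@dataclass
class Edge:
    src: str
    dst: str
    relation: str  # e.g. "cites", "depicts", "purchased_with"


@dataclass
class MultiModalGraph:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[Edge] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, edge: Edge) -> None:
        self.edges.append(edge)

    def neighbors(self, node_id: str) -> list[Node]:
        """Return nodes reachable from node_id along any relation."""
        return [self.nodes[e.dst] for e in self.edges if e.src == node_id]


# Usage: a tiny product graph mixing text and image features.
g = MultiModalGraph()
g.add_node(Node("p1", text="wireless headphones", image_embedding=[0.1, 0.4]))
g.add_node(Node("p2", text="charging case", image_embedding=[0.3, 0.2]))
g.add_edge(Edge("p1", "p2", relation="purchased_with"))
print([n.node_id for n in g.neighbors("p1")])  # ['p2']

The point of the sketch is the heterogeneity: attributes live in different modalities and edges in different relation types, which is exactly what the abstract's "unified space" characteristic aims to reconcile.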
Similar Papers
Graph-MLLM: Harnessing Multimodal Large Language Models for Multimodal Graph Learning
Machine Learning (CS)
Helps computers understand pictures and words together.
MLaGA: Multimodal Large Language and Graph Assistant
Artificial Intelligence
Helps computers understand pictures and words together.
Multi-Modal Hypergraph Enhanced LLM Learning for Recommendation
Information Retrieval
Helps computers suggest better things you'll like.