Complementarity-driven Representation Learning for Multi-modal Knowledge Graph Completion
By: Lijian Li
Potential Business Impact:
Helps computers find missing facts by combining pictures and words.
Multi-modal Knowledge Graph Completion (MMKGC) aims to uncover hidden world knowledge in multi-modal knowledge graphs by jointly leveraging multi-modal and structural entity information. However, multi-modal knowledge graphs are inherently imbalanced: modality coverage varies across entities, which makes it difficult to exploit the additional modality data for robust entity representations. Existing MMKGC methods typically rely on attention- or gate-based fusion mechanisms and overlook the complementarity contained in multi-modal data. In this paper, we propose a novel framework named Mixture of Complementary Modality Experts (MoCME), which consists of a Complementarity-guided Modality Knowledge Fusion (CMKF) module and an Entropy-guided Negative Sampling (EGNS) mechanism. The CMKF module exploits both intra-modal and inter-modal complementarity to fuse multi-view and multi-modal embeddings, enhancing entity representations. The EGNS mechanism dynamically prioritizes informative and uncertain negative samples, improving training effectiveness and model robustness. Extensive experiments on five benchmark datasets demonstrate that MoCME achieves state-of-the-art performance, surpassing existing approaches.
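The abstract does not spell out how the entropy-guided weighting is computed, but the idea of prioritizing uncertain negatives can be sketched. The snippet below is a minimal, hypothetical PyTorch reading of such a mechanism, not the paper's exact formulation: it weights each sampled negative by its entropy contribution in the model's score distribution over the candidate set, so negatives the model cannot confidently separate contribute more to the loss. The function name, margin loss, and temperature are illustrative assumptions.

```python
# Sketch only: one plausible reading of "entropy-guided negative sampling",
# assuming a KGC model that produces scores for positive and negative triples.
import torch
import torch.nn.functional as F

def entropy_weighted_negative_loss(pos_scores, neg_scores, temperature=1.0, margin=1.0):
    """Weight sampled negatives by the entropy (uncertainty) of the model's
    score distribution over them.

    pos_scores: (batch,)        scores of positive triples
    neg_scores: (batch, n_neg)  scores of sampled negative triples
    """
    # Probability the model assigns to each negative within its candidate set.
    probs = F.softmax(neg_scores / temperature, dim=-1)              # (batch, n_neg)
    # Per-negative entropy contribution: large when the model is uncertain
    # about that negative relative to the others in the set.
    weights = -probs * torch.log(probs + 1e-12)                       # (batch, n_neg)
    weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-12)   # normalize per sample
    # Margin-style ranking loss with entropy-derived weights on the negatives.
    per_neg = F.relu(margin + neg_scores - pos_scores.unsqueeze(-1))
    return (weights * per_neg).sum(dim=-1).mean()
```

Under these assumptions, easy negatives (confidently scored low) receive little weight, while ambiguous ones dominate the gradient, which is the behavior the abstract attributes to EGNS.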
Similar Papers
HERGC: Heterogeneous Experts Representation and Generative Completion for Multimodal Knowledge Graphs
Computation and Language
Helps computers understand pictures and words to find missing facts.
ELMM: Efficient Lightweight Multimodal Large Language Models for Multimodal Knowledge Graph Completion
Artificial Intelligence
Helps computers understand pictures and words better.
Towards Structure-aware Model for Multi-modal Knowledge Graph Completion
Multimedia
Helps computers understand pictures and words together.