A Zero-shot Learning Method Based on Large Language Models for Multi-modal Knowledge Graph Embedding
By: Bingchen Liu, Jingchen Li, Yuanyuan Fang, and more
Potential Business Impact:
Lets computers learn about new things without seeing them.
Zero-shot learning (ZSL) is crucial for tasks involving unseen categories, such as natural language processing, image classification, and cross-lingual transfer. Current applications often fail to accurately infer and handle new relations or entities involving unseen categories, severely limiting their scalability and practicality in open-domain scenarios. ZSL faces the challenge of effectively transferring semantic information of unseen categories in multi-modal knowledge graph (MMKG) embedding representation learning. In this paper, we propose ZSLLM, a framework for zero-shot embedding learning of MMKGs using large language models (LLMs). We leverage textual modality information of unseen categories as prompts to fully utilize the reasoning capabilities of LLMs, enabling semantic information transfer across different modalities for unseen categories. Through model-based learning, the embedding representation of unseen categories in MMKG is enhanced. Extensive experiments conducted on multiple real-world datasets demonstrate the superiority of our approach compared to state-of-the-art methods.
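The abstract describes prompting an LLM with the textual modality of unseen categories and mapping the result into the MMKG embedding space. The sketch below illustrates that general idea only; it is not the authors' ZSLLM implementation. The encoder name, prompt template, `build_prompt` helper, and the linear projection into a KG space are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): encode a textual description of an
# unseen category with a pretrained language model, then project the vector
# into a knowledge-graph embedding space. Assumes `transformers` and `torch`.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

ENCODER_NAME = "sentence-transformers/all-MiniLM-L6-v2"  # assumed text encoder
KG_DIM = 200                                             # assumed MMKG embedding size

tokenizer = AutoTokenizer.from_pretrained(ENCODER_NAME)
encoder = AutoModel.from_pretrained(ENCODER_NAME)

def build_prompt(category: str, description: str) -> str:
    # Hypothetical prompt template wrapping the textual modality of an unseen category.
    return f"Entity category: {category}. Description: {description}."

def encode_prompts(prompts: list[str]) -> torch.Tensor:
    """Mean-pool the encoder's token embeddings into one vector per prompt."""
    inputs = tokenizer(prompts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state       # (batch, seq_len, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # (batch, hidden)

# Learned projection from text space into the MMKG embedding space; in a full
# system this would be trained jointly with the KG embedding model.
project = nn.Linear(encoder.config.hidden_size, KG_DIM)

prompts = [build_prompt("snow leopard",
                        "a large cat native to the mountain ranges of Central Asia")]
unseen_embedding = project(encode_prompts(prompts))         # shape: (1, KG_DIM)
print(unseen_embedding.shape)
```

In this reading, the zero-shot transfer comes from the text encoder's semantic space: an unseen category never observed in the graph still receives an embedding derived from its description, which the projection places alongside the embeddings of seen entities.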
Similar Papers
Interpretable Zero-shot Learning with Infinite Class Concepts
CV and Pattern Recognition
Teaches computers to recognize new things without training.
Knowledge Graphs for Enhancing Large Language Models in Entity Disambiguation
Machine Learning (CS)
Helps computers understand facts better, avoiding mistakes.
Towards Multi-modal Graph Large Language Model
Machine Learning (CS)
Teaches computers to understand many kinds of connected information.