A Zero-shot Learning Method Based on Large Language Models for Multi-modal Knowledge Graph Embedding

Published: March 10, 2025 | arXiv ID: 2503.07202v2

By: Bingchen Liu, Jingchen Li, Yuanyuan Fang and more

Potential Business Impact:

Lets computers handle new categories of things without having seen examples of them.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Zero-shot learning (ZL) is crucial for tasks involving unseen categories, such as natural language processing, image classification, and cross-lingual transfer. Current applications often fail to accurately infer and handle new relations or entities involving unseen categories, severely limiting their scalability and practicality in open-domain scenarios. ZL faces the challenge of effectively transferring semantic information of unseen categories in multi-modal knowledge graph (MMKG) embedding representation learning. In this paper, we propose ZSLLM, a framework for zero-shot embedding learning of MMKGs using large language models (LLMs). We leverage textual modality information of unseen categories as prompts to fully utilize the reasoning capabilities of LLMs, enabling semantic information transfer across different modalities for unseen categories. Through model-based learning, the embedding representation of unseen categories in the MMKG is enhanced. Extensive experiments on multiple real-world datasets demonstrate the superiority of our approach compared to state-of-the-art methods.
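The abstract describes prompting an LLM with the textual modality of an unseen category and transferring that semantic signal into the MMKG embedding space. The paper's own pipeline is not reproduced here; the sketch below only illustrates the transfer step, under loudly stated assumptions: random arrays stand in for LLM-derived text features and for trained MMKG embeddings of seen categories, and a least-squares linear projection (a common ZSL baseline, not necessarily what ZSLLM uses) maps text space into the embedding space so unseen categories receive embeddings with no KG training signal.

```python
import numpy as np

# Hypothetical setup: text features (e.g., produced by prompting an LLM)
# for all categories, and trained MMKG embeddings for *seen* categories only.
rng = np.random.default_rng(0)
d_text, d_kg = 64, 32
n_seen, n_unseen = 100, 5

T_seen = rng.normal(size=(n_seen, d_text))      # LLM-derived text features, seen categories
E_seen = rng.normal(size=(n_seen, d_kg))        # learned MMKG embeddings, seen categories
T_unseen = rng.normal(size=(n_unseen, d_text))  # text features for unseen categories

# Fit a linear map W from text space to MMKG embedding space on the seen
# categories (ordinary least squares), so semantic information carried by
# the textual modality can be transferred across modalities.
W, *_ = np.linalg.lstsq(T_seen, E_seen, rcond=None)

# Zero-shot step: project unseen categories' text features into the MMKG
# embedding space, giving them embeddings despite having no training triples.
E_unseen = T_unseen @ W
print(E_unseen.shape)  # (5, 32)
```

In this toy form, the quality of the unseen embeddings rests entirely on how well the text features encode category semantics, which is why the paper emphasizes exploiting the reasoning capabilities of LLMs at the prompting stage.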

Country of Origin
🇨🇳 China

Page Count
20 pages

Category
Computer Science:
Artificial Intelligence