CCNU at SemEval-2025 Task 3: Leveraging Internal and External Knowledge of Large Language Models for Multilingual Hallucination Annotation

Published: May 17, 2025 | arXiv ID: 2505.11965v1

By: Xu Liu, Guanyi Chen

Potential Business Impact:

Detects hallucinated content in the answers produced by question-answering systems across multiple languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present the system developed by the Central China Normal University (CCNU) team for the Mu-SHROOM shared task, which focuses on identifying hallucinations in question-answering systems across 14 different languages. Our approach leverages multiple Large Language Models (LLMs) with distinct areas of expertise, employing them in parallel to annotate hallucinations, effectively simulating a crowdsourcing annotation process. Furthermore, each LLM-based annotator integrates both internal and external knowledge related to the input during the annotation process. Using the open-source LLM DeepSeek-V3, our system achieves the top ranking (#1) for Hindi data and secures a Top-5 position in seven other languages. In this paper, we also discuss unsuccessful approaches explored during our development process and share key insights gained from participating in this shared task.
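The crowdsourcing-style setup the abstract describes can be illustrated with a minimal sketch: several independent "annotators" (stand-ins for distinct LLMs) each flag the token positions of an answer they consider hallucinated, and the flags are merged by majority vote. The function names, the token-index representation, and the majority-vote merging rule here are illustrative assumptions, not details taken from the paper.

```python
from collections import Counter

def merge_annotations(token_votes, num_annotators, threshold=0.5):
    """Merge per-annotator hallucination flags by majority vote.

    token_votes: one set of flagged token indices per annotator
                 (hypothetical representation, not the paper's format).
    Keeps a token if at least `threshold` of annotators flagged it.
    """
    counts = Counter()
    for votes in token_votes:
        counts.update(votes)
    return sorted(i for i, c in counts.items() if c / num_annotators >= threshold)

# Three simulated annotators flag token indices in a model answer.
annotators = [{2, 3, 4}, {3, 4}, {4, 5}]
print(merge_annotations(annotators, num_annotators=3))  # → [3, 4]
```

In this toy run, tokens 3 and 4 survive because two or more of the three annotators flagged them, mirroring how parallel annotators can filter out idiosyncratic judgments from any single model.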

Country of Origin
🇨🇳 China

Page Count
7 pages

Category
Computer Science:
Computation and Language