LLM-based Few-Shot Early Rumor Detection with Imitation Agent
By: Fengzhu Zeng, Qian Shao, Ling Cheng, et al.
Early Rumor Detection (EARD) aims to identify the earliest point at which a claim can be accurately classified based on a sequence of social media posts. This is especially challenging in data-scarce settings. While Large Language Models (LLMs) perform well in few-shot NLP tasks, they are not well-suited to time-series data and are computationally expensive to both train and run. In this work, we propose a novel EARD framework that combines an autonomous agent with an LLM-based detection model: the agent acts as a reliable decision-maker for early time point determination, while the LLM serves as a powerful rumor detector. This approach offers the first solution for few-shot EARD, requiring only the training of a lightweight agent while the LLM remains training-free. Extensive experiments on four real-world datasets show that our approach boosts performance across LLMs and surpasses existing EARD methods in both accuracy and earliness.
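The division of labor described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's implementation): `llm_detect` is a toy stand-in for a frozen LLM rumor detector that scores the posts seen so far, and `StopAgent` is a lightweight agent that decides the earliest time point at which to commit to the detector's label. The keyword-based confidence and the stopping threshold are invented for demonstration only.

```python
from dataclasses import dataclass
from typing import List, Tuple

def llm_detect(posts: List[str]) -> Tuple[str, float]:
    """Toy stand-in for a training-free LLM rumor detector.

    Returns a (label, confidence) pair for the claim given the posts
    observed so far. A real system would prompt an LLM here.
    """
    rumor_cues = sum(p.lower().count("fake") + p.lower().count("hoax")
                     for p in posts)
    confidence = min(1.0, 0.4 + 0.2 * rumor_cues)  # toy confidence score
    label = "rumor" if rumor_cues > 0 else "non-rumor"
    return label, confidence

@dataclass
class StopAgent:
    """Lightweight agent for early time point determination.

    Halts once the detector's confidence clears a threshold, or at the
    end of the post stream. In the paper this decision is learned; the
    fixed threshold here is purely illustrative.
    """
    threshold: float = 0.7

    def should_stop(self, confidence: float, t: int, horizon: int) -> bool:
        return confidence >= self.threshold or t == horizon - 1

def early_detect(posts: List[str], agent: StopAgent) -> Tuple[str, int]:
    """Scan posts in time order; the agent picks the earliest stop point,
    and the detector supplies the label at that point."""
    label = "non-rumor"
    for t in range(len(posts)):
        label, confidence = llm_detect(posts[: t + 1])
        if agent.should_stop(confidence, t, len(posts)):
            return label, t  # label plus earliest decision index
    return label, len(posts) - 1

posts = [
    "claim surfaces",
    "users call it a hoax",
    "fake screenshots shared",
    "debunk posted",
]
print(early_detect(posts, StopAgent(threshold=0.7)))  # → ('rumor', 2)
```

Note that only the agent's stopping policy would need training; the detector is queried as-is, which is what keeps the few-shot setting tractable.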