RADAR: Retrieval-Augmented Detector with Adversarial Refinement for Robust Fake News Detection
By: Song-Duo Ma, Yi-Hung Liu, Hsin-Yu Lin, and more
Potential Business Impact:
Finds fake news better by teaching computers to argue.
To efficiently combat the spread of LLM-generated misinformation, we present RADAR, a retrieval-augmented detector with adversarial refinement for robust fake news detection. Our approach employs a generator that rewrites real articles with factual perturbations, paired with a lightweight detector that verifies claims using dense passage retrieval. To enable effective co-evolution, we introduce verbal adversarial feedback (VAF). Rather than relying on scalar rewards, VAF issues structured natural-language critiques; these guide the generator toward more sophisticated evasion attempts, compelling the detector to adapt and improve. On a fake news detection benchmark, RADAR achieves 86.98% ROC-AUC, significantly outperforming general-purpose LLMs with retrieval. Ablation studies confirm that detector-side retrieval yields the largest gains, while VAF and few-shot demonstrations provide critical signals for robust training.
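As a rough illustration of the approach described above, the sketch below mocks up the generator-detector co-evolution loop with verbal adversarial feedback. All class and function names (Generator, Detector, Retriever, adversarial_refinement) and the placeholder retrieval and decision logic are assumptions made for illustration, not the authors' released code or the exact RADAR architecture.

# Hypothetical sketch of RADAR-style adversarial refinement with verbal
# adversarial feedback (VAF). Names and placeholder logic are illustrative
# assumptions, not the paper's API.

from dataclasses import dataclass
from typing import List


@dataclass
class Critique:
    """Structured natural-language feedback passed from detector to generator."""
    verdict: str          # "fake" or "real"
    evidence: List[str]   # retrieved passages supporting the verdict
    explanation: str      # why the perturbation was (or was not) caught


class Retriever:
    """Stand-in for dense passage retrieval over a trusted corpus."""
    def __init__(self, corpus: List[str]):
        self.corpus = corpus

    def retrieve(self, claim: str, k: int = 3) -> List[str]:
        # Toy lexical-overlap ranking; a real system would use dense embeddings.
        words = set(claim.lower().split())
        scored = sorted(self.corpus,
                        key=lambda p: -len(words & set(p.lower().split())))
        return scored[:k]


class Detector:
    """Lightweight detector that verifies claims against retrieved evidence."""
    def __init__(self, retriever: Retriever):
        self.retriever = retriever

    def judge(self, article: str) -> Critique:
        evidence = self.retriever.retrieve(article)
        # Placeholder decision rule: flag articles carrying a crude perturbation marker.
        caught = "[perturbed]" in article
        return Critique(
            verdict="fake" if caught else "real",
            evidence=evidence,
            explanation="Claim contradicts retrieved passages." if caught
                        else "No contradiction found against the evidence.",
        )


class Generator:
    """Rewrites a real article with factual perturbations, conditioned on critiques."""
    def perturb(self, article: str, feedback: List[Critique]) -> str:
        # Placeholder: a real generator would be an LLM prompted with prior
        # critiques so that later rewrites evade the detector more subtly.
        marker = "" if feedback else "[perturbed] "
        return marker + article.replace("2020", "2021")


def adversarial_refinement(article: str, generator: Generator,
                           detector: Detector, rounds: int = 3) -> List[Critique]:
    """Co-evolution loop: VAF passes structured critiques instead of scalar rewards."""
    feedback: List[Critique] = []
    for _ in range(rounds):
        fake = generator.perturb(article, feedback)
        feedback.append(detector.judge(fake))
    return feedback


if __name__ == "__main__":
    corpus = ["The summit took place in 2020 in Geneva.",
              "Officials confirmed the agreement was signed in Geneva."]
    critiques = adversarial_refinement(
        "The summit took place in 2020 in Geneva.",
        Generator(), Detector(Retriever(corpus)),
    )
    for c in critiques:
        print(c.verdict, "-", c.explanation)

The key design point this sketch tries to convey is that the feedback passed back to the generator is a structured critique (verdict, evidence, explanation) rather than a scalar reward, which is what the paper credits for driving more sophisticated evasion attempts and, in turn, a more robust detector.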
Similar Papers
RAAR: Retrieval Augmented Agentic Reasoning for Cross-Domain Misinformation Detection
Computation and Language
Finds fake news even in new places.
FVA-RAG: Falsification-Verification Alignment for Mitigating Sycophantic Hallucinations
Computation and Language
Stops AI from believing fake news.
RADAR: Recall Augmentation through Deferred Asynchronous Retrieval
Information Retrieval
Finds better videos for you faster.