CAF-I: A Collaborative Multi-Agent Framework for Enhanced Irony Detection with Large Language Models
By: Ziqi Liu, Ziyang Zhou, Mingxuan Hu
Potential Business Impact:
Helps computers understand sarcasm better.
Large language models (LLMs) have become mainstream methods in the field of sarcasm detection. However, existing LLM-based methods face challenges in irony detection, including (1) single-perspective limitations, (2) insufficient comprehensive understanding, and (3) lack of interpretability. This paper introduces the Collaborative Agent Framework for Irony (CAF-I), an LLM-driven multi-agent system designed to overcome these issues. CAF-I employs specialized agents for Context, Semantics, and Rhetoric, which perform multidimensional analysis and engage in interactive collaborative optimization. A Decision Agent then consolidates these perspectives, with a Refinement Evaluator Agent providing conditional feedback for further optimization. Experiments on benchmark datasets establish CAF-I's state-of-the-art zero-shot performance: achieving SOTA on the vast majority of metrics, CAF-I reaches an average Macro-F1 of 76.31, a 4.98-point absolute improvement over the strongest prior baseline. These gains come from its effective simulation of human-like multi-perspective analysis, which improves both detection accuracy and interpretability.
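To make the agent roles concrete, below is a minimal Python sketch of the pipeline as described in the abstract: three perspective agents (Context, Semantics, Rhetoric), a Decision Agent that consolidates their analyses, and a Refinement Evaluator that conditionally triggers another pass. The prompts, the `call_llm` helper, and the single-retry loop are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a CAF-I-style multi-agent irony detector.
# All prompt wording and helper names are assumptions for illustration.

from dataclasses import dataclass

PERSPECTIVES = {
    "context": "Analyze the situational and conversational context of the text.",
    "semantics": "Analyze the literal meaning versus the implied meaning.",
    "rhetoric": "Analyze rhetorical devices such as hyperbole or mock praise.",
}


def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to any LLM backend."""
    raise NotImplementedError("Plug in your LLM client here.")


@dataclass
class Verdict:
    label: str      # "ironic" or "not ironic"
    rationale: str  # natural-language explanation, for interpretability


def detect_irony(text: str, max_refinements: int = 1) -> Verdict:
    # 1. Specialized agents produce multidimensional analyses.
    analyses = {
        name: call_llm(f"{instruction}\n\nText: {text}")
        for name, instruction in PERSPECTIVES.items()
    }

    decision = ""
    for _ in range(max_refinements + 1):
        # 2. Decision agent consolidates the perspectives into one judgment.
        decision = call_llm(
            "Given these analyses, decide whether the text is ironic.\n"
            + "\n".join(f"[{k}] {v}" for k, v in analyses.items())
            + f"\n\nText: {text}\nGive a label and a short rationale."
        )

        # 3. Refinement evaluator gives conditional feedback; stop if accepted.
        feedback = call_llm(
            "Critique this irony judgment for consistency with the analyses:\n"
            f"{decision}\nReply 'ACCEPT' if sound, otherwise explain what to revisit."
        )
        if feedback.strip().upper().startswith("ACCEPT"):
            break
        # Otherwise feed the critique back for another consolidation round.
        analyses["evaluator_feedback"] = feedback

    lowered = decision.lower()
    label = "not ironic" if "not ironic" in lowered else "ironic"
    return Verdict(label=label, rationale=decision)
```

In this sketch the zero-shot setting corresponds to prompts that contain no labeled examples, only the role instructions and the input text.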
Similar Papers
CAMF: Collaborative Adversarial Multi-agent Framework for Machine Generated Text Detection
Computation and Language
Finds fake writing made by computers.
Cloud Investigation Automation Framework (CIAF): An AI-Driven Approach to Cloud Forensics
Cryptography and Security
Finds computer crimes faster and more accurately.
Manifesto from Dagstuhl Perspectives Workshop 24352 -- Conversational Agents: A Framework for Evaluation (CAFE)
Computation and Language
Helps computers understand and answer questions better.