Leveraging Large Language Models for Sarcastic Speech Annotation in Sarcasm Detection
By: Zhu Li, Yuqing Zhang, Xiyuan Gao, and more
Potential Business Impact:
Teaches computers to hear sarcasm in voices.
Sarcasm fundamentally alters meaning through tone and context, yet detecting it in speech remains a challenge due to data scarcity. In addition, existing detection systems often rely on multimodal data, limiting their applicability in contexts where only speech is available. To address this, we propose an annotation pipeline that leverages large language models (LLMs) to generate a sarcasm dataset. Using a publicly available sarcasm-focused podcast, we employ GPT-4o and LLaMA 3 for initial sarcasm annotations, followed by human verification to resolve disagreements. We validate this approach by comparing annotation quality and detection performance on a publicly available sarcasm dataset using a collaborative gating architecture. Finally, we introduce PodSarc, a large-scale sarcastic speech dataset created through this pipeline. The detection model achieves a 73.63% F1 score, demonstrating the dataset's potential as a benchmark for sarcasm detection research.
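The abstract's core idea, two LLM annotators whose agreements are accepted and whose disagreements are routed to human verification, can be sketched in a few lines of code. The sketch below is a minimal, hypothetical illustration, not the authors' implementation: the `annotate_corpus` function, the `Utterance`/`Annotation` data classes, and the toy annotator callables are all assumptions, and in practice the two annotators would wrap prompted GPT-4o and LLaMA 3 calls over the podcast transcripts.

```python
# Hypothetical sketch of the dual-LLM annotation step described in the abstract:
# two models label each utterance, agreements are kept, disagreements go to humans.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Utterance:
    utterance_id: str
    transcript: str   # transcript of the podcast segment
    context: str      # surrounding dialogue, since sarcasm depends on context

@dataclass
class Annotation:
    utterance_id: str
    label: Optional[str]      # "sarcastic" / "not_sarcastic", or None if unresolved
    needs_human_review: bool

# An annotator is any callable mapping an utterance to a binary label.
# Here the GPT-4o and LLaMA 3 calls are deliberately left abstract.
Annotator = Callable[[Utterance], str]

def annotate_corpus(utterances: List[Utterance],
                    annotator_a: Annotator,
                    annotator_b: Annotator) -> List[Annotation]:
    """Label utterances with two LLMs; flag disagreements for human verification."""
    results: List[Annotation] = []
    for utt in utterances:
        label_a = annotator_a(utt)
        label_b = annotator_b(utt)
        if label_a == label_b:
            results.append(Annotation(utt.utterance_id, label_a, needs_human_review=False))
        else:
            # Disagreement: leave unlabeled and send to a human annotator.
            results.append(Annotation(utt.utterance_id, None, needs_human_review=True))
    return results

if __name__ == "__main__":
    # Toy stand-ins for the LLM annotators, only to make the sketch runnable.
    demo = [Utterance("ep1_0001", "Oh great, another Monday.", "Speaker sighs after a long pause.")]
    always_sarcastic = lambda u: "sarcastic"
    always_literal = lambda u: "not_sarcastic"
    for ann in annotate_corpus(demo, always_sarcastic, always_literal):
        print(ann)
```

Under these assumptions, only the utterances the two models disagree on require human effort, which is how a large-scale dataset like PodSarc could be built with limited manual annotation.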
Similar Papers
Evaluating Multimodal Large Language Models on Spoken Sarcasm Understanding
Computation and Language
Helps computers understand jokes by voice, text, and face.
Making Machines Sound Sarcastic: LLM-Enhanced and Retrieval-Guided Sarcastic Speech Synthesis
Computation and Language
Makes computer voices sound sarcastic and natural.
On the Impact of Language Nuances on Sentiment Analysis with Large Language Models: Paraphrasing, Sarcasm, and Emojis
Computation and Language
Makes computers understand feelings in texts better.