Making Machines Sound Sarcastic: LLM-Enhanced and Retrieval-Guided Sarcastic Speech Synthesis
By: Zhu Li, Yuqing Zhang, Xiyuan Gao, and more
Potential Business Impact:
Makes computer voices sound sarcastic and natural.
Sarcasm is a subtle form of non-literal language that poses significant challenges for speech synthesis due to its reliance on nuanced semantic, contextual, and prosodic cues. While existing speech synthesis research has focused primarily on broad emotional categories, sarcasm remains largely unexplored. In this paper, we propose a Large Language Model (LLM)-enhanced, retrieval-augmented framework for sarcasm-aware speech synthesis. Our approach combines (1) semantic embeddings from a LoRA-fine-tuned LLaMA 3, which capture the pragmatic incongruity and discourse-level cues of sarcasm, and (2) prosodic exemplars retrieved via a Retrieval-Augmented Generation (RAG) module, which provide expressive reference patterns of sarcastic delivery. Integrated within a VITS backbone, this dual conditioning enables more natural and contextually appropriate sarcastic speech. Experiments demonstrate that our method outperforms baselines in both objective measures and subjective evaluations, yielding improvements in speech naturalness, sarcastic expressivity, and downstream sarcasm detection.
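The dual-conditioning idea in the abstract can be sketched in a few lines: a semantic embedding (a stand-in for the LoRA-fine-tuned LLaMA 3 output) is used as a query to retrieve prosodic exemplars by cosine similarity, and the two signals are fused into a single conditioning vector for the synthesis backbone. This is a minimal illustrative sketch, not the paper's implementation; all function names, the concatenation-based fusion, and the toy dimensions are assumptions.

```python
# Illustrative sketch of dual conditioning: semantic query -> retrieve
# prosodic exemplars -> fuse into one conditioning vector.
# Names, fusion strategy, and dimensions are hypothetical.
import numpy as np

def cosine_sim(query, bank):
    """Cosine similarity between a query vector and each row of a matrix."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return b @ q

def retrieve_exemplars(query, bank, k=2):
    """Indices of the k prosodic exemplars most similar to the query."""
    scores = cosine_sim(query, bank)
    return np.argsort(scores)[::-1][:k]

def dual_condition(semantic, bank, k=2):
    """Concatenate the semantic embedding with the mean of the retrieved
    prosodic exemplars, mimicking retrieval-guided conditioning."""
    idx = retrieve_exemplars(semantic, bank, k)
    prosody = bank[idx].mean(axis=0)
    return np.concatenate([semantic, prosody])

# Toy example: a bank of 4 stored prosodic exemplars of dimension 3.
rng = np.random.default_rng(0)
bank = rng.normal(size=(4, 3))
semantic = rng.normal(size=3)
cond = dual_condition(semantic, bank, k=2)
print(cond.shape)  # (6,) — semantic (3) + pooled prosody (3)
```

In a real system the pooled exemplar would more likely condition the acoustic model through attention or a reference encoder rather than simple concatenation; averaging top-k neighbors is just the simplest way to show the retrieval step.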
Similar Papers
Evaluating Multimodal Large Language Models on Spoken Sarcasm Understanding
Computation and Language
Helps computers understand jokes by voice, text, and face.
Leveraging Large Language Models for Sarcastic Speech Annotation in Sarcasm Detection
Computation and Language
Teaches computers to hear sarcasm in voices.
Context-Aware Pragmatic Metacognitive Prompting for Sarcasm Detection
Computation and Language
Helps computers understand jokes and sarcasm better.