Detecting Mental Manipulation in Speech via Synthetic Multi-Speaker Dialogue
By: Run Chen, Wen Liang, Ziwei Gong, and others
Potential Business Impact:
Detects sneaky talk in voices, not just words.
Mental manipulation, the strategic use of language to covertly influence or exploit others, is a newly emerging task in computational social reasoning. Prior work has focused exclusively on textual conversations, overlooking how manipulative tactics manifest in speech. We present the first study of mental manipulation detection in spoken dialogues, introducing SPEECHMENTALMANIP, a synthetic multi-speaker benchmark that augments a text-based dataset with high-quality, voice-consistent text-to-speech (TTS) audio. Using few-shot large audio-language models and human annotation, we evaluate how modality affects detection accuracy and perception. Our results reveal that models exhibit high specificity but markedly lower recall on speech compared to text, suggesting sensitivity to missing acoustic or prosodic cues in training. Human raters show similar uncertainty in the audio setting, underscoring the inherent ambiguity of manipulative speech. Together, these findings highlight the need for modality-aware evaluation and safety alignment in multimodal dialogue systems.
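The specificity-versus-recall gap the abstract describes can be made concrete with a minimal sketch (not the paper's code; the prediction vectors below are hypothetical, chosen only to illustrate the reported pattern of matched specificity but lower recall on speech):

```python
# Minimal sketch: comparing detection metrics across modalities.
# Labels: 1 = manipulative dialogue, 0 = benign dialogue.

def recall(y_true, y_pred):
    """True-positive rate: fraction of manipulative dialogues caught."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    pos = sum(y_true)
    return tp / pos if pos else 0.0

def specificity(y_true, y_pred):
    """True-negative rate: fraction of benign dialogues correctly cleared."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    neg = len(y_true) - sum(y_true)
    return tn / neg if neg else 0.0

# Hypothetical model outputs on the same 8 dialogues in each modality.
y_true      = [1, 1, 1, 1, 0, 0, 0, 0]
pred_text   = [1, 1, 1, 0, 0, 0, 0, 1]
pred_speech = [1, 0, 0, 0, 0, 0, 0, 1]

print(recall(y_true, pred_text), specificity(y_true, pred_text))      # 0.75 0.75
print(recall(y_true, pred_speech), specificity(y_true, pred_speech))  # 0.25 0.75
```

Here specificity is identical across modalities while recall drops sharply on speech, i.e. the model still rarely flags benign audio but misses many manipulative exchanges it would have caught in text.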
Similar Papers
SELF-PERCEPT: Introspection Improves Large Language Models' Detection of Multi-Person Mental Manipulation in Conversations
Computation and Language
Helps computers spot sneaky mind games in talks.
Benchmarking Gaslighting Attacks Against Speech Large Language Models
Computation and Language
Makes voice AI less likely to be tricked.
ChatbotManip: A Dataset to Facilitate Evaluation and Oversight of Manipulative Chatbot Behaviour
Computation and Language
Teaches computers to spot when chatbots try to trick you.