Double Entendre: Robust Audio-Based AI-Generated Lyrics Detection via Multi-View Fusion
By: Markus Frohmann, Gabriel Meseguer-Brocal, Markus Schedl, and more
Potential Business Impact:
Finds fake music by listening to singing.
The rapid advancement of AI-based music generation tools is revolutionizing the music industry but also posing challenges to artists, copyright holders, and providers alike. This necessitates reliable methods for detecting such AI-generated content. However, existing detectors, relying on either audio or lyrics, face key practical limitations: audio-based detectors fail to generalize to new or unseen generators and are vulnerable to audio perturbations, while lyrics-based methods require cleanly formatted and accurate lyrics, which are unavailable in practice. To overcome these limitations, we propose a novel, practically grounded approach: a multimodal, modular late-fusion pipeline that combines automatically transcribed sung lyrics with speech features capturing lyrics-related information within the audio. By relying on lyrical aspects derived directly from the audio, our method enhances robustness, mitigates susceptibility to low-level artifacts, and enables practical applicability. Experiments show that our method, DE-detect, outperforms existing lyrics-based detectors while also being more robust to audio perturbations. Thus, it offers an effective, robust solution for detecting AI-generated music in real-world scenarios. Our code is available at https://github.com/deezer/robust-AI-lyrics-detection.
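To make the late-fusion idea concrete, below is a minimal sketch of how two independently computed views, an embedding of automatically transcribed lyrics and lyrics-related speech features, could be concatenated and passed to a simple classifier. This is not the authors' DE-detect implementation (see the linked repository for that); the two feature extractors are hypothetical placeholders standing in for an ASR-plus-text-embedding step and a speech-feature encoder.

# A minimal late-fusion sketch for illustration only; not the authors' DE-detect code.
# The two feature extractors are hypothetical placeholders: a real system would embed
# automatically transcribed sung lyrics and extract lyrics-related speech features
# from the raw audio.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def embed_transcribed_lyrics(audio_path: str) -> np.ndarray:
    # Placeholder: stands in for ASR transcription followed by a text embedding.
    return rng.normal(size=384)

def extract_speech_features(audio_path: str) -> np.ndarray:
    # Placeholder: stands in for a speech-feature encoder applied to the audio.
    return rng.normal(size=256)

def fuse(audio_path: str) -> np.ndarray:
    # Late fusion: each view is computed independently, then the vectors are concatenated.
    return np.concatenate([embed_transcribed_lyrics(audio_path),
                           extract_speech_features(audio_path)])

# Train a simple classifier on the fused features (1 = AI-generated, 0 = human-written).
paths = [f"track_{i}.mp3" for i in range(20)]
labels = np.array([0, 1] * 10)
X = np.stack([fuse(p) for p in paths])
detector = LogisticRegression(max_iter=1000).fit(X, labels)
print(detector.predict(X[:3]))

Because the fusion happens at the feature level, either view could be swapped out (e.g., a different transcription model) without retraining the other, which is the modularity the abstract highlights.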
Similar Papers
FusionAudio-1.2M: Towards Fine-grained Audio Captioning with Multimodal Contextual Fusion
Sound
Makes computers describe sounds with more detail.
A Practical Synthesis of Detecting AI-Generated Textual, Visual, and Audio Content
Computation and Language
Finds fake pictures, words, and sounds made by computers.
DeepAgent: A Dual Stream Multi Agent Fusion for Robust Multimodal Deepfake Detection
CV and Pattern Recognition
Finds fake videos and audio better.