Leveraging Whisper Embeddings for Audio-based Lyrics Matching
By: Eleonora Mancini, Joan Serrà, Paolo Torroni, and others
Potential Business Impact:
Finds song lyrics from just the music.
Audio-based lyrics matching can be an appealing alternative to other content-based retrieval approaches, but existing methods often suffer from limited reproducibility and inconsistent baselines. In this work, we introduce WEALY, a fully reproducible pipeline that leverages Whisper decoder embeddings for lyrics matching tasks. WEALY establishes robust and transparent baselines, while also exploring multimodal extensions that integrate textual and acoustic features. Through extensive experiments on standard datasets, we demonstrate that WEALY achieves performance comparable to that of state-of-the-art methods that lack reproducibility. In addition, we provide ablation studies and analyses on language robustness, loss functions, and embedding strategies. This work contributes a reliable benchmark for future research and underscores the potential of speech technologies for music information retrieval tasks.
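The retrieval step described above reduces to nearest-neighbor search in an embedding space: a query song is encoded (in WEALY, via Whisper decoder embeddings) and compared against a database of candidate embeddings. The abstract does not specify the similarity measure or search procedure, so the following is a minimal sketch of the matching stage under the common assumption of cosine-similarity ranking, with random vectors standing in for real Whisper embeddings:

```python
import numpy as np

def rank_by_cosine(query: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Return candidate indices sorted by descending cosine similarity to the query.

    `query` has shape (d,); `candidates` has shape (n, d).
    """
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ q  # one cosine similarity per candidate
    return np.argsort(-sims)

# Toy example: 4 candidate "lyrics embeddings" of dimension 8.
# In the real pipeline these would come from the Whisper decoder.
rng = np.random.default_rng(0)
db = rng.standard_normal((4, 8))
query = db[2] + 0.01 * rng.standard_normal(8)  # near-duplicate of candidate 2
ranking = rank_by_cosine(query, db)
print(ranking[0])  # the near-duplicate, index 2, should rank first
```

The cosine measure is chosen here only for illustration; contrastive losses commonly used for such retrieval tasks (which the paper's loss-function ablations likely cover) are typically paired with cosine or dot-product similarity at search time.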
Similar Papers
Lyrics Matter: Exploiting the Power of Learnt Representations for Music Popularity Prediction
Sound
Predicts song hits using lyrics and sound.
Exploiting Music Source Separation for Automatic Lyrics Transcription with Whisper
Sound
Helps computers write down sung lyrics from music.
WAVE: Learning Unified & Versatile Audio-Visual Embeddings with Multimodal LLM
CV and Pattern Recognition
Lets computers understand sound, video, and words together.