Score: 2

Audio-JEPA: Joint-Embedding Predictive Architecture for Audio Representation Learning

Published: June 25, 2025 | arXiv ID: 2507.02915v1

By: Ludovic Tuncay, Etienne Labbé, Emmanouil Benetos, and more

Potential Business Impact:

Teaches computers to understand sounds with less data.

Business Areas:
Podcast Media and Entertainment, Music and Audio

Building on the Joint-Embedding Predictive Architecture (JEPA) paradigm, a recent self-supervised learning framework that predicts latent representations of masked regions in high-level feature spaces, we propose Audio-JEPA (Audio Joint-Embedding Predictive Architecture), tailored specifically for audio data. Audio-JEPA uses a simple Vision Transformer backbone to predict latent representations of masked spectrogram patches rather than reconstructing raw audio. We pre-train on unlabeled AudioSet clips (10 s, 32 kHz) with random patch masking on mel-spectrograms. We evaluate on the X-ARES suite covering speech, music, and environmental sound tasks. Although our implementation is a straightforward translation of the original model to audio, it achieves performance comparable to wav2vec 2.0 and data2vec while using less than one-fifth of their training data and requiring no hyper-parameter tuning. All code and pretrained checkpoints will be released on GitHub.
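
The core idea is a JEPA-style objective: a context encoder sees only the unmasked mel-spectrogram patches, a slowly updated (EMA) target encoder produces latent targets for the masked patches, and a predictor regresses those targets in latent space rather than reconstructing audio. The PyTorch sketch below illustrates one such training step; the module sizes, masking ratio, EMA rate, and loss choice are illustrative assumptions and are not taken from the Audio-JEPA release.

```python
# Minimal sketch of a JEPA-style training step on mel-spectrogram patch tokens.
# All names, sizes, and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCHES, DIM = 256, 768  # assumed: one 10 s clip -> 256 spectrogram patches

class TinyTransformer(nn.Module):
    """Stand-in for the ViT backbone (patch embedding assumed already done)."""
    def __init__(self, dim=DIM, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        return self.encoder(x)

context_encoder = TinyTransformer()
predictor = TinyTransformer(depth=2)
target_encoder = TinyTransformer()
target_encoder.load_state_dict(context_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad = False  # target branch is updated by EMA, not by gradients

opt = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def training_step(patch_embeddings, mask_ratio=0.75):
    """patch_embeddings: (batch, PATCHES, DIM) mel-spectrogram patch tokens."""
    b, device = patch_embeddings.size(0), patch_embeddings.device
    n_masked = int(PATCHES * mask_ratio)
    perm = torch.rand(b, PATCHES, device=device).argsort(dim=1)
    masked_idx, visible_idx = perm[:, :n_masked], perm[:, n_masked:]

    def gather(x, idx):
        return x.gather(1, idx.unsqueeze(-1).expand(-1, -1, DIM))

    # Targets: latent representations of the masked patches from the EMA encoder.
    with torch.no_grad():
        targets = gather(target_encoder(patch_embeddings), masked_idx)

    # Context: encode only the visible patches, then predict the masked latents.
    context = context_encoder(gather(patch_embeddings, visible_idx))
    mask_tokens = torch.zeros(b, n_masked, DIM, device=device)  # learned token + pos. emb. in practice
    preds = predictor(torch.cat([context, mask_tokens], dim=1))[:, -n_masked:]

    # Regression in latent space; raw audio is never reconstructed.
    loss = F.smooth_l1_loss(preds, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # EMA update of the target encoder.
    with torch.no_grad():
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(0.996).add_(p_c, alpha=0.004)
    return loss.item()

# Example: one step on a random batch of 8 clips' patch embeddings.
loss = training_step(torch.randn(8, PATCHES, DIM))
```

Predicting in latent space (rather than pixel-level spectrogram reconstruction) lets the encoder focus on higher-level structure, which is the motivation the JEPA paradigm gives for this objective.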

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links
None listed yet; the abstract states that code and pretrained checkpoints will be released on GitHub.

Page Count
6 pages

Category
Computer Science: Sound