SAM Audio Judge: A Unified Multimodal Framework for Perceptual Evaluation of Audio Separation
By: Helin Wang, Bowen Shi, Andros Tjandra, and more
Potential Business Impact:
Tests sound quality automatically, like a human listener.
Performance evaluation remains a complex challenge in audio separation: existing metrics are often misaligned with human perception, coarse-grained, and reliant on ground-truth signals. Subjective listening tests remain the gold standard for real-world evaluation, but they are expensive, time-consuming, and difficult to scale. This paper addresses the growing need for automated systems capable of evaluating audio separation without human intervention. The proposed metric, SAM Audio Judge (SAJ), is a multimodal, fine-grained, reference-free objective metric that shows high alignment with human perception. SAJ supports three audio domains (speech, music, and general sound events) and three prompt inputs (text, visual, and span), covering four evaluation dimensions (recall, precision, faithfulness, and overall). SAM Audio Judge also shows promise in data filtering, pseudo-labeling large datasets, and reranking outputs of audio separation models. The code and pre-trained models are released at: https://github.com/facebookresearch/sam-audio.
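To make the data-filtering application concrete, here is a minimal sketch of how a reference-free judge like SAJ could gate a training set on its "overall" score. All names (`SAJScores`, `filter_dataset`, `stub_judge`, the `quality` field) are hypothetical illustrations, not the actual SAM Audio Judge API; the stub judge stands in for the real multimodal model.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical score container mirroring SAJ's four evaluation
# dimensions; the real model's output format may differ.
@dataclass
class SAJScores:
    recall: float
    precision: float
    faithfulness: float
    overall: float

def filter_dataset(samples: List[dict],
                   judge: Callable[[dict], SAJScores],
                   threshold: float = 0.8) -> List[dict]:
    """Keep only separated clips whose overall judge score clears
    the threshold -- the data-filtering use case described above."""
    return [s for s in samples if judge(s).overall >= threshold]

# Stand-in judge for illustration: reads a precomputed quality field
# instead of running a multimodal model on the audio itself.
def stub_judge(sample: dict) -> SAJScores:
    q = sample["quality"]
    return SAJScores(recall=q, precision=q, faithfulness=q, overall=q)

samples = [{"id": "a", "quality": 0.95}, {"id": "b", "quality": 0.40}]
kept = filter_dataset(samples, stub_judge)
# kept retains only the high-scoring sample "a"
```

The same pattern extends to pseudo-labeling (store the scores instead of filtering) and reranking (sort candidate separations by their overall score).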
Similar Papers
AudioJudge: Understanding What Works in Large Audio Model Based Speech Evaluation
Computation and Language
Lets computers judge speech quality like people.
Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound
Sound
Lets computers judge music quality automatically.
ReFESS-QI: Reference-Free Evaluation For Speech Separation With Joint Quality And Intelligibility Scoring
Audio and Speech Processing
Scores separated speech quality without needing the original audio.