Score: 3

SAM Audio Judge: A Unified Multimodal Framework for Perceptual Evaluation of Audio Separation

Published: January 27, 2026 | arXiv ID: 2601.19702v1

By: Helin Wang, Bowen Shi, Andros Tjandra, and more

BigTech Affiliations: Johns Hopkins University, Meta

Potential Business Impact:

Tests sound quality automatically, like a human listener.

Business Areas:
Speech Recognition, Data and Analytics, Software

Performance evaluation remains a complex challenge in audio separation: existing metrics are often misaligned with human perception, coarse-grained, and dependent on ground-truth reference signals. Subjective listening tests remain the gold standard for real-world evaluation, but they are expensive, time-consuming, and difficult to scale. This paper addresses the growing need for automated systems capable of evaluating audio separation without human intervention. The proposed metric, SAM Audio Judge (SAJ), is a multimodal, fine-grained, reference-free objective metric that shows high alignment with human perception. SAJ supports three audio domains (speech, music, and general sound events) and three prompt inputs (text, visual, and span), covering four evaluation dimensions (recall, precision, faithfulness, and overall). SAM Audio Judge also shows potential applications in data filtering, pseudo-labeling large datasets, and reranking in audio separation models. We release our code and pre-trained models at: https://github.com/facebookresearch/sam-audio.
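As a rough illustration of how a reference-free judge such as SAJ might be used, the sketch below scores a separated waveform against a text prompt and reranks candidate separations by the overall score, one of the downstream uses mentioned above. The `SAJScores` container, the `score_separation` and `rerank` helpers, and the judge's `evaluate` method are illustrative assumptions, not the actual sam-audio API.

```python
# Hypothetical usage sketch -- the judge object, its evaluate() method,
# and the helper names are assumptions for illustration, not the real
# sam-audio interface.
from dataclasses import dataclass

import torchaudio  # assumed available for loading waveforms


@dataclass
class SAJScores:
    recall: float        # how much of the target source was recovered
    precision: float     # how little non-target content leaked in
    faithfulness: float  # perceptual fidelity of the separated signal
    overall: float       # overall perceptual quality


def score_separation(judge, audio_path: str, prompt: str) -> SAJScores:
    """Query a (hypothetical) reference-free judge with a text prompt.

    No ground-truth reference is passed: the judge scores the separated
    audio directly against the prompt, mirroring the four dimensions
    described in the abstract.
    """
    waveform, sample_rate = torchaudio.load(audio_path)
    raw = judge.evaluate(waveform, sample_rate=sample_rate, prompt=prompt)
    return SAJScores(
        recall=raw["recall"],
        precision=raw["precision"],
        faithfulness=raw["faithfulness"],
        overall=raw["overall"],
    )


def rerank(judge, candidate_paths, prompt):
    """Order candidate separations by the judge's overall score (best first)."""
    scored = [(p, score_separation(judge, p, prompt).overall) for p in candidate_paths]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```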

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/facebookresearch/sam-audio
Page Count
13 pages

Category
Electrical Engineering and Systems Science: Audio and Speech Processing