QASTAnet: A DNN-based Quality Metric for Spatial Audio
By: Adrien Llave, Emma Granier, Grégory Pallone
Potential Business Impact:
Tests sound quality faster and cheaper.
In the development of spatial audio technologies, reliable and shared methods for evaluating audio quality are essential. Listening tests are currently the standard but remain costly in time and resources. Several models that predict subjective scores have been proposed, but they generalize poorly to real-world signals. In this paper, we propose QASTAnet (Quality Assessment for SpaTial Audio network), a new metric based on a deep neural network, specialized in spatial audio (ambisonics and binaural). As training data is scarce, we aim for the model to be trainable with a small amount of data. To do so, we rely on expert modeling of the low-level auditory system and use a neural network to model the high-level cognitive function of quality judgement. We compare its performance to two reference metrics on a wide range of content types (speech, music, ambiance, anechoic, reverberated), focusing on codec artifacts. Results demonstrate that QASTAnet overcomes the aforementioned limitations of existing methods. The strong correlation between its predictions and subjective scores makes it a good candidate for comparing codecs during their development.
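The hybrid design described above, a fixed expert auditory front-end feeding a small learned quality head, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's architecture: the front-end here is just log band energies of the magnitude spectrum (a stand-in for a real peripheral auditory model), the "cognitive" head is a tiny random-weight MLP, and all names (`auditory_frontend`, `quality_score`) are hypothetical.

```python
import numpy as np

def auditory_frontend(signal, n_bands=32):
    """Stand-in for an expert low-level auditory model: log energies in
    evenly split spectral bands (hypothetical; not the paper's front-end)."""
    spectrum = np.abs(np.fft.rfft(signal))
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    return np.array([np.log1p(spectrum[a:b].sum())
                     for a, b in zip(edges[:-1], edges[1:])])

def quality_score(reference, degraded, W1, b1, w2, b2):
    """Tiny MLP 'cognitive' head: maps the distance between the internal
    representations of reference and degraded signals to one score."""
    d = np.abs(auditory_frontend(reference) - auditory_frontend(degraded))
    h = np.maximum(0.0, W1 @ d + b1)  # ReLU hidden layer
    return float(w2 @ h + b2)

# Demo with random (untrained) weights and synthetic signals.
rng = np.random.default_rng(0)
sr = 48000
ref = rng.standard_normal(sr)          # 1 s of noise as a mock reference
deg = ref + 0.1 * rng.standard_normal(sr)  # mock codec degradation
n_bands, hidden = 32, 16
W1 = rng.standard_normal((hidden, n_bands))
b1 = np.zeros(hidden)
w2 = rng.standard_normal(hidden)
b2 = 3.0  # output bias; identical signals score exactly b2 here
score_identical = quality_score(ref, ref, W1, b1, w2, b2)   # → 3.0
score_degraded = quality_score(ref, deg, W1, b1, w2, b2)
```

In a trained system, only the head's weights would be fit to listening-test scores, which is what lets the model get by with little training data: the hand-crafted front-end supplies the low-level perceptual structure that would otherwise have to be learned.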
Similar Papers
AuralNet: Hierarchical Attention-based 3D Binaural Localization of Overlapping Speakers
Audio and Speech Processing
Finds sounds in 3D, even when mixed.
Evaluating Objective Speech Quality Metrics for Neural Audio Codecs
Sound
Helps pick the best way to test audio quality.
BINAQUAL: A Full-Reference Objective Localization Similarity Metric for Binaural Audio
Audio and Speech Processing
Checks if 3D sound is in the right place.