Creating a Good Teacher for Knowledge Distillation in Acoustic Scene Classification

Published: March 14, 2025 | arXiv ID: 2503.11363v1

By: Tobias Morocutti, Florian Schmid, Khaled Koutini, and more

Potential Business Impact:

Enables small, efficient models to learn the capabilities of larger models.

Business Areas:
Knowledge Management, Administrative Services

Knowledge Distillation (KD) is a widespread technique for compressing the knowledge of large models into more compact and efficient models. KD has proved highly effective in building well-performing low-complexity Acoustic Scene Classification (ASC) systems and was used in all the top-ranked submissions to this task of the annual DCASE challenge in the past three years. There is extensive research on establishing the KD process, designing efficient student models, and forming well-performing teacher ensembles. However, less research has been conducted on which teacher model attributes are beneficial for low-complexity students. In this work, we try to close this gap by studying the effect on the student's performance of using different teacher network architectures, varying the teacher model size, training teachers with different device generalization methods, and applying different ensembling strategies. The results show that teacher model size, device generalization methods, the ensembling strategy, and the ensemble size are key factors for a well-performing student network.
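
A minimal sketch of the logit-based distillation setup the abstract describes, assuming the standard Hinton-style KD loss and simple logit averaging across the teacher ensemble; the temperature, loss weighting, and ensembling strategy shown here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, labels,
                      temperature=2.0, kd_weight=0.5):
    """Combine hard-label cross-entropy with a soft-target KD term.

    teacher_logits_list holds the logits from each teacher in the ensemble;
    they are averaged into a single soft target (one possible ensembling
    strategy, assumed here for illustration).
    """
    # Average the ensemble's logits into one teacher prediction.
    teacher_logits = torch.stack(teacher_logits_list).mean(dim=0)

    # Soften both distributions with the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between softened teacher and student, scaled by T^2.
    kd_term = F.kl_div(log_probs, soft_targets,
                       reduction="batchmean") * temperature ** 2

    # Ordinary cross-entropy on the ground-truth scene labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return kd_weight * kd_term + (1.0 - kd_weight) * ce_term

if __name__ == "__main__":
    # Random tensors stand in for a batch of 8 clips and 10 scene classes.
    student = torch.randn(8, 10)
    teachers = [torch.randn(8, 10) for _ in range(3)]
    labels = torch.randint(0, 10, (8,))
    print(distillation_loss(student, teachers, labels))
```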

Page Count
5 pages

Category
Computer Science:
Sound