Teaching Audio Models to Reason: A Unified Framework for Source- and Layer-wise Distillation
By: Runyan Yang, Yuke Si, Yingying Gao, and more
Potential Business Impact:
Teaches computers to reason about speech, not just transcribe it.
While large audio language models excel at tasks like ASR and emotion recognition, they still struggle with complex reasoning due to the modality gap between audio and text and the lack of structured intermediate supervision. To address this, we propose a unified knowledge distillation framework that transfers reasoning capabilities from a high-capacity textual teacher model to a student audio model while preserving its acoustic competence. Our method introduces two key dimensions: source-wise distillation, which leverages both textual and acoustic teachers to provide complementary modality-specific supervision; and layer-wise distillation, which aligns teacher signals with appropriate student layers to improve transfer efficiency. This dual-dimensional strategy enables fine-grained control over the distillation process, effectively bridging the gap between symbolic reasoning and speech representations. Experimental results show significant improvements in audio reasoning performance, demonstrating the effectiveness of our framework as a reasoning transfer solution for audio modeling.
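The abstract does not spell out the alignment losses or the layer-mapping scheme, so the following is a minimal hypothetical sketch, not the paper's implementation. It assumes frozen teachers, a learned linear projection to bridge student and teacher hidden sizes, mean pooling over time to reconcile audio and text sequence lengths, and MSE as the alignment loss; all function names, the layer maps, and the alpha weighting are illustrative assumptions.

```python
# Hypothetical sketch of source- and layer-wise distillation (assumptions:
# frozen teachers, MSE hidden-state alignment, mean pooling over time).
import torch
import torch.nn as nn
import torch.nn.functional as F

def layerwise_distill_loss(student_hiddens, teacher_hiddens, layer_map, proj):
    """Align selected student layers with selected teacher layers.

    student_hiddens: list of [batch, seq, d_s] tensors, one per student layer
    teacher_hiddens: list of [batch, seq, d_t] tensors, one per teacher layer
    layer_map:       dict {student_layer_idx: teacher_layer_idx} (assumed)
    proj:            module mapping d_s -> d_t (hypothetical bridge)
    """
    loss = 0.0
    for s_idx, t_idx in layer_map.items():
        s_h = proj(student_hiddens[s_idx])      # project to teacher width
        t_h = teacher_hiddens[t_idx].detach()   # teacher stays frozen
        # Mean-pool over time so audio and text sequence lengths can differ.
        loss = loss + F.mse_loss(s_h.mean(dim=1), t_h.mean(dim=1))
    return loss / len(layer_map)

def sourcewise_distill_loss(student_hiddens, text_teacher_hiddens,
                            audio_teacher_hiddens, text_map, audio_map,
                            text_proj, audio_proj, alpha=0.5):
    """Combine supervision from a textual and an acoustic teacher."""
    l_text = layerwise_distill_loss(student_hiddens, text_teacher_hiddens,
                                    text_map, text_proj)
    l_audio = layerwise_distill_loss(student_hiddens, audio_teacher_hiddens,
                                     audio_map, audio_proj)
    return alpha * l_text + (1 - alpha) * l_audio

# Toy usage with random hidden states and made-up dimensions.
if __name__ == "__main__":
    d_s, d_t = 512, 1024
    student = [torch.randn(2, 50, d_s) for _ in range(6)]
    text_t  = [torch.randn(2, 30, d_t) for _ in range(12)]
    audio_t = [torch.randn(2, 50, d_t) for _ in range(12)]
    loss = sourcewise_distill_loss(student, text_t, audio_t,
                                   text_map={5: 11}, audio_map={2: 5},
                                   text_proj=nn.Linear(d_s, d_t),
                                   audio_proj=nn.Linear(d_s, d_t))
    print(loss.item())
```

The alpha weight balances the two supervision sources, and the layer maps illustrate the paper's stated idea of routing each teacher's signal to appropriate student depths; the specific pairings here are placeholders.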
Similar Papers
SightSound-R1: Cross-Modal Reasoning Distillation from Vision to Audio Language Models
Sound
Teaches computers to understand sounds better.
Step-Audio-R1 Technical Report
Artificial Intelligence
Helps computers understand sounds by thinking.