Step-Audio-R1 Technical Report
By: Fei Tian, Xiangyu Tony Zhang, Yuxin Zhang, et al.
Recent advances in reasoning models have demonstrated remarkable success in the text and vision domains through extended chain-of-thought deliberation. However, a perplexing phenomenon persists in audio language models: they consistently perform better with minimal or no reasoning, raising a fundamental question: can audio intelligence truly benefit from deliberate thinking? We introduce Step-Audio-R1, the first audio reasoning model to successfully unlock reasoning capabilities in the audio domain. Through our proposed Modality-Grounded Reasoning Distillation (MGRD) framework, Step-Audio-R1 learns to generate audio-relevant reasoning chains that genuinely ground themselves in acoustic features rather than hallucinating disconnected deliberations. Our model exhibits strong audio reasoning capabilities, surpassing Gemini 2.5 Pro and achieving performance comparable to the state-of-the-art Gemini 3 Pro across comprehensive audio understanding and reasoning benchmarks spanning speech, environmental sounds, and music. These results demonstrate that reasoning is a transferable capability across modalities when appropriately anchored, transforming extended deliberation from a liability into a powerful asset for audio intelligence. By establishing the first successful audio reasoning model, Step-Audio-R1 opens new pathways toward building truly multimodal reasoning systems that think deeply across all sensory modalities.
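The abstract does not detail how MGRD separates audio-grounded reasoning chains from hallucinated, text-only deliberation. As a purely illustrative sketch (the cue list, data format, and function names below are assumptions, not the authors' method), one filtering step in such a distillation pipeline might keep only reasoning chains that explicitly cite acoustic evidence:

```python
# Hypothetical grounding filter for a distillation pipeline in the spirit
# of Modality-Grounded Reasoning Distillation (MGRD): retain only samples
# whose reasoning chain references acoustic features, so the student model
# is trained on audio-anchored deliberation. All names here are illustrative.

ACOUSTIC_CUES = {
    "pitch", "timbre", "tempo", "prosody", "reverb",
    "intonation", "melody", "background noise", "speaker",
}

def is_grounded(reasoning: str, min_cues: int = 1) -> bool:
    """Accept a chain only if it mentions at least `min_cues` acoustic cues."""
    text = reasoning.lower()
    return sum(cue in text for cue in ACOUSTIC_CUES) >= min_cues

def filter_chains(samples: list[dict]) -> list[dict]:
    """Keep (question, reasoning, answer) samples anchored in the audio."""
    return [s for s in samples if is_grounded(s["reasoning"])]

if __name__ == "__main__":
    samples = [
        {"reasoning": "The rising intonation and fast tempo suggest excitement.",
         "answer": "excited"},
        {"reasoning": "The answer is probably 'sad' because the question implies it.",
         "answer": "sad"},
    ]
    print(len(filter_chains(samples)))  # only the acoustically grounded chain survives
```

In a real pipeline the grounding check would more plausibly be a learned judge or the audio model itself, but the keyword version conveys the core idea: distill on deliberation that points back at the signal.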