Thinking in Directivity: Speech Large Language Model for Multi-Talker Directional Speech Recognition

Published: June 17, 2025 | arXiv ID: 2506.14973v1

By: Jiamin Xie, Ju Lin, Yiteng Huang, and more

BigTech Affiliations: Meta

Potential Business Impact:

Lets glasses hear who is talking where.

Business Areas:
Speech Recognition, Data and Analytics, Software

Recent studies have demonstrated that prompting large language models (LLMs) with audio encodings enables effective speech recognition capabilities. However, the ability of Speech LLMs to comprehend and process multi-channel audio with spatial cues remains relatively uninvestigated. In this work, we present directional-SpeechLlama, a novel approach that leverages the microphone array of smart glasses to achieve directional speech recognition, source localization, and bystander cross-talk suppression. To enhance the model's ability to understand directivity, we propose two key techniques: serialized directional output training (S-DOT) and contrastive direction data augmentation (CDDA). Experimental results show that our proposed directional-SpeechLlama effectively captures the relationship between textual cues and spatial audio, yielding strong performance in both speech recognition and source localization tasks.
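The abstract names serialized directional output training (S-DOT) without spelling out the target format. A plausible reading is that per-talker transcripts are interleaved with direction tags into a single token stream, so the model learns to emit "who said what, from where" jointly. The sketch below is purely illustrative: the `<dir=...>` and `<sep>` tag syntax, the azimuth-sorted ordering, and the function name are assumptions, not the paper's actual format.

```python
# Hypothetical sketch of a serialized directional output target.
# The tag syntax and ordering convention are illustrative assumptions,
# not taken from the paper summarized above.

def serialize_directional_output(utterances):
    """Build one target string from (azimuth_degrees, transcript) pairs.

    Talkers are ordered by azimuth so the serialization is deterministic.
    """
    parts = []
    for azimuth, text in sorted(utterances, key=lambda u: u[0]):
        # Prefix each transcript with a direction tag the model must predict.
        parts.append(f"<dir={azimuth}> {text}")
    return " <sep> ".join(parts)

target = serialize_directional_output([
    (90, "turn left at the corner"),
    (30, "what time is it"),
])
print(target)
# <dir=30> what time is it <sep> <dir=90> turn left at the corner
```

Sorting by azimuth gives the model a fixed canonical order, which is one common way serialized output training resolves the label-permutation ambiguity in multi-talker recognition.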

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Electrical Engineering and Systems Science:
Audio and Speech Processing