Spatial Audio Processing with Large Language Model on Wearable Devices
By: Ayushi Mishra, Yang Bai, Priyadarshan Narayanasamy, and more
Potential Business Impact:
Lets wearable devices hear where sounds come from.
Integrating spatial context into large language models (LLMs) has the potential to revolutionize human-computer interaction, particularly on wearable devices. In this work, we present SING, a novel system architecture that incorporates spatial speech understanding into LLMs, enabling contextually aware and adaptive applications for wearable technologies. Our approach leverages microstructure-based spatial sensing to extract precise Direction of Arrival (DoA) information using a monaural microphone. To address the lack of existing datasets of microstructure-assisted speech recordings, we synthetically create a dataset called OmniTalk from the LibriSpeech corpus. The spatial information is fused with linguistic embeddings from OpenAI's Whisper model, allowing each modality to learn complementary contextual representations. The fused embeddings are aligned with the input space of the LLaMA-3.2 3B model and fine-tuned with the lightweight adaptation technique LoRA to optimize for on-device processing. SING supports spatially aware automatic speech recognition (ASR), achieving a mean DoA error of $25.72^\circ$ (a substantial improvement over the $88.52^\circ$ median error in existing work) with a word error rate (WER) of 5.3. SING also supports soundscaping, for example, inferring how many people are talking and their directions, handling up to 5 speakers with a median DoA error of $16^\circ$. Our system demonstrates superior performance in spatial speech understanding while addressing the challenges of power efficiency, privacy, and hardware constraints, paving the way for advanced applications in augmented reality, accessibility, and immersive experiences.
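The fusion step described in the abstract (DoA features combined with Whisper embeddings, then projected into the LLM's input space) can be sketched in a few lines of PyTorch. The following is a minimal, illustrative sketch, not the authors' released code: the module name `SpatialSpeechFusion`, the dimensions (`whisper_dim=768`, `doa_dim=64`, `llama_dim=3072`), and the (sin, cos) encoding of the DoA angle are all our assumptions.

```python
# Hypothetical sketch of the fusion pipeline: DoA features + Whisper
# encoder output -> fused representation projected into the LLaMA-3.2 3B
# embedding space. Dimensions and module structure are illustrative.
import torch
import torch.nn as nn

class SpatialSpeechFusion(nn.Module):
    def __init__(self, whisper_dim=768, doa_dim=64, llama_dim=3072):
        super().__init__()
        # Embed the scalar DoA angle (radians), represented as (sin, cos)
        # to avoid the wrap-around discontinuity at 0/2*pi, into a
        # learned feature vector.
        self.doa_encoder = nn.Sequential(
            nn.Linear(2, doa_dim),
            nn.GELU(),
            nn.Linear(doa_dim, doa_dim),
        )
        # Project the concatenated modalities into the LLM's input space.
        self.fusion_proj = nn.Linear(whisper_dim + doa_dim, llama_dim)

    def forward(self, whisper_feats, doa_angle):
        # whisper_feats: (batch, seq_len, whisper_dim) from Whisper's encoder
        # doa_angle:     (batch,) direction of arrival in radians
        doa_vec = torch.stack([doa_angle.sin(), doa_angle.cos()], dim=-1)
        doa_emb = self.doa_encoder(doa_vec)                 # (batch, doa_dim)
        # Broadcast the per-utterance DoA embedding across the time axis.
        doa_emb = doa_emb.unsqueeze(1).expand(-1, whisper_feats.size(1), -1)
        fused = torch.cat([whisper_feats, doa_emb], dim=-1)
        return self.fusion_proj(fused)           # (batch, seq_len, llama_dim)

if __name__ == "__main__":
    fusion = SpatialSpeechFusion()
    feats = torch.randn(2, 100, 768)       # dummy Whisper encoder output
    angles = torch.tensor([0.5, 2.1])      # dummy DoA angles in radians
    print(fusion(feats, angles).shape)     # torch.Size([2, 100, 3072])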
Similar Papers
Thinking in Directivity: Speech Large Language Model for Multi-Talker Directional Speech Recognition
Audio and Speech Processing
Lets glasses hear who is talking where.
OWL: Geometry-Aware Spatial Reasoning for Audio Large Language Models
Sound
Helps computers hear where sounds come from.
SpatialLM: Training Large Language Models for Structured Indoor Modeling
CV and Pattern Recognition
Lets computers understand 3D spaces like rooms.