Synaspot: A Lightweight, Streaming Multi-modal Framework for Keyword Spotting with Audio-Text Synergy
By: Kewei Li, Yinan Zhong, Xiaotao Liang, and more
Open-vocabulary keyword spotting (KWS) in continuous speech streams holds significant practical value across a wide range of real-world applications. Increasing attention has been paid to the role of different modalities in KWS, and their effectiveness is now well established. However, the parameter overhead of multimodal integration and the constraints of end-to-end deployment have limited the practical applicability of such models. To address these challenges, we propose a lightweight, streaming multi-modal framework. First, we focus on multimodal enrollment features and suppress speaker-specific (voiceprint) information in the speech enrollment to extract speaker-independent characteristics. Second, we effectively fuse speech and text features. Finally, we introduce a streaming decoding framework that requires only the encoder to extract features, which are then decoded directly against our three modal representations. Experiments on LibriPhrase and WenetPhrase demonstrate the effectiveness of our model. Compared to existing streaming approaches, our method achieves better performance with significantly fewer parameters.
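To make the encoder-only streaming decoding idea concrete, the sketch below shows one plausible reading of the pipeline the abstract describes: a keyword is enrolled by fusing a speaker-suppressed speech embedding with a text embedding, and detection then reduces to scoring encoder frame features against that fused representation over a sliding window. This is a minimal illustration under assumptions, not the paper's actual method; all function names, the averaging fusion, the window length, and the threshold are hypothetical stand-ins for the components the paper leaves unspecified here.

```python
import torch
import torch.nn.functional as F

def fuse_enrollment(speech_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Fuse a speaker-suppressed speech enrollment embedding with a text
    embedding into one keyword representation. Simple averaging is used
    here purely for illustration; the paper's fusion module is richer."""
    return F.normalize(speech_emb + text_emb, dim=-1)

def streaming_detect(frame_features: torch.Tensor,
                     keyword_emb: torch.Tensor,
                     window: int = 20,
                     threshold: float = 0.6) -> list[int]:
    """Slide a fixed-length window over encoder frame features and flag
    positions whose pooled embedding is close to the keyword embedding.

    frame_features: (T, D) encoder outputs for the incoming audio stream.
    keyword_emb:    (D,)   fused enrollment representation.
    Returns the start frame of every window whose score clears the threshold.
    """
    hits = []
    for t in range(window, frame_features.size(0) + 1):
        # Mean-pool the current window, normalize, and score by cosine similarity.
        pooled = F.normalize(frame_features[t - window:t].mean(dim=0), dim=0)
        score = torch.dot(pooled, keyword_emb).item()
        if score >= threshold:
            hits.append(t - window)
    return hits

# Toy usage: random tensors stand in for real encoder outputs and enrollments.
D = 128
speech_enroll = F.normalize(torch.randn(D), dim=0)  # voiceprint-suppressed speech embedding
text_enroll = F.normalize(torch.randn(D), dim=0)    # text embedding of the keyword
keyword = fuse_enrollment(speech_enroll, text_enroll)
stream = torch.randn(200, D)                        # (T, D) streaming frame features
print(streaming_detect(stream, keyword))
```

The point of this shape of decoder is that only the encoder runs per frame; the keyword representation is computed once at enrollment, so adding modalities raises enrollment cost rather than per-frame inference cost, which is consistent with the abstract's lightweight, streaming claims.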
Similar Papers
Keyword Spotting with Hyper-Matched Filters for Small Footprint Devices
Audio and Speech Processing
Finds any word in speech, even new ones.
Joint Multimodal Contrastive Learning for Robust Spoken Term Detection and Keyword Spotting
Sound
Helps computers find words in spoken sounds.
End-to-End Efficiency in Keyword Spotting: A System-Level Approach for Embedded Microcontrollers
Sound
Makes small devices hear your voice commands better.