Can LLMs Understand Unvoiced Speech? Exploring EMG-to-Text Conversion with LLMs
By: Payal Mohapatra, Akash Pandey, Xiaoyuan Zhang, and more
Potential Business Impact:
Lets computers understand silent speech from muscle signals.
Unvoiced electromyography (EMG) is an effective communication tool for individuals unable to produce vocal speech. However, most prior methods rely on paired voiced and unvoiced EMG signals, along with speech data, for EMG-to-text conversion, which is not practical for such individuals. Given the rise of large language models (LLMs) in speech recognition, we explore their potential to understand unvoiced speech. To this end, we address the challenge of learning from unvoiced EMG alone and propose a novel EMG adaptor module that maps EMG features into an LLM's input space, achieving an average word error rate (WER) of 0.49 on a closed-vocabulary unvoiced EMG-to-text task. Even with a conservative data availability of just six minutes, our approach improves performance over specialized models by nearly 20%. While LLMs have been shown to be extendable to new language modalities -- such as audio -- understanding articulatory biosignals like unvoiced EMG remains more challenging. This work takes a crucial first step toward enabling LLMs to comprehend unvoiced speech using surface EMG.
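The core idea of the adaptor is to map sequences of EMG features into the LLM's token-embedding space so the frozen LLM can treat them as pseudo-tokens. A minimal sketch of that idea follows; the channel count, window size, pooling, and single linear projection are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

class EMGAdaptor:
    """Toy adaptor: projects windowed EMG features into an LLM embedding space.

    All dimensions here are illustrative assumptions, not the paper's design.
    """

    def __init__(self, emg_dim=8, llm_dim=4096, window=32, seed=0):
        rng = np.random.default_rng(seed)
        self.window = window
        # A single linear projection stands in for the learned adaptor
        # (randomly initialised here; in practice it would be trained).
        self.W = rng.standard_normal((emg_dim, llm_dim)) * 0.02
        self.b = np.zeros(llm_dim)

    def __call__(self, emg):
        # emg: (T, emg_dim) feature stream; average-pool into fixed windows
        # to reduce the EMG sampling rate toward a token-like rate.
        T, d = emg.shape
        n = T // self.window
        pooled = emg[: n * self.window].reshape(n, self.window, d).mean(axis=1)
        # Map each pooled frame to one pseudo-token embedding for the LLM.
        return pooled @ self.W + self.b  # shape (n, llm_dim)

adaptor = EMGAdaptor()
tokens = adaptor(np.zeros((130, 8)))
print(tokens.shape)  # (4, 4096)
```

In a full system these pseudo-token embeddings would be concatenated with the embeddings of a text prompt and fed to the LLM, which is then trained (or lightly fine-tuned) to emit the transcript.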
Similar Papers
A Silent Speech Decoding System from EEG and EMG with Heterogenous Electrode Configurations
Quantitative Methods
Lets people talk with their minds.
TESU-LLM: Training Speech-LLMs Without Speech via Unified Encoder Alignment
Computation and Language
Teaches computers to understand speech without hearing it.
EmoSLLM: Parameter-Efficient Adaptation of LLMs for Speech Emotion Recognition
Audio and Speech Processing
Helps computers understand your feelings from your voice.