End-to-End Audio-Visual Learning for Cochlear Implant Sound Coding in Noisy Environments
By: Meng-Ping Lin, Enoch Hsin-Ho Huang, Shao-Yi Chien, and more
Potential Business Impact:
Helps cochlear implant users understand speech in noisy places.
The cochlear implant (CI) is a remarkable biomedical device that enables individuals with severe-to-profound hearing loss to perceive sound by converting speech into electrical stimulation signals. Despite advances in recent CI systems, speech comprehension in noisy or reverberant conditions remains a challenge. Ongoing developments in deep learning offer promising opportunities for enhancing CI sound coding, not only by replicating traditional signal processing methods with neural networks, but also by integrating visual cues as auxiliary data for multimodal speech processing. This paper therefore introduces a novel noise-suppressing CI system, AVSE-ECS, which uses an audio-visual speech enhancement (AVSE) model as a pre-processing module for the deep-learning-based ElectrodeNet-CS (ECS) sound coding strategy. Specifically, a joint training approach is applied to model AVSE-ECS as an end-to-end CI system. Experimental results indicate that the proposed method outperforms the previous ECS strategy in noisy conditions, with improved objective speech intelligibility scores. The methods and findings in this study demonstrate the feasibility and potential of using deep learning to integrate the AVSE module into an end-to-end CI system.
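The pipeline described in the abstract, an audio-visual enhancement front-end feeding a CI sound-coding back-end, can be sketched in simplified form. The sketch below is purely illustrative and is not the paper's implementation: a fixed sigmoid mask stands in for the AVSE network, and a classic n-of-m channel-selection step (as in ACE-style strategies) stands in for ElectrodeNet-CS; all function names, dimensions, and parameters are hypothetical.

```python
import numpy as np

def stft_mag(audio, n_fft=128, hop=64):
    """Magnitude spectrogram via short-time FFT with a Hann window."""
    win = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * win
              for i in range(0, len(audio) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))  # shape (T, F)

def avse_mask(noisy_mag, visual_feats):
    """Stand-in for the AVSE model: a mask in (0, 1) per time-frequency bin.
    Here the 'model' is just a sigmoid of a fixed random projection of the
    concatenated audio and visual features (illustrative only)."""
    feats = np.concatenate([noisy_mag, visual_feats], axis=1)
    rng = np.random.default_rng(0)
    w = rng.standard_normal((feats.shape[1], noisy_mag.shape[1])) * 0.1
    return 1.0 / (1.0 + np.exp(-feats @ w))

def n_of_m_select(envelopes, n=8):
    """n-of-m channel selection: per frame, keep the n largest channel
    envelopes and zero the rest."""
    out = np.zeros_like(envelopes)
    idx = np.argsort(envelopes, axis=1)[:, -n:]
    np.put_along_axis(out, idx,
                      np.take_along_axis(envelopes, idx, axis=1), axis=1)
    return out

# Toy end-to-end pass: noisy audio + visual features -> electrode pattern.
fs, dur = 16000, 0.25
audio = np.random.default_rng(1).standard_normal(int(fs * dur))
mag = stft_mag(audio)                    # (T, 65) for n_fft=128
visual = np.zeros((mag.shape[0], 16))    # placeholder lip-feature stream
enhanced = avse_mask(mag, visual) * mag  # enhancement front-end
# Pool FFT bins into 22 electrode channels (22 is typical for some devices).
channels = enhanced[:, :44].reshape(mag.shape[0], 22, 2).mean(axis=2)
stim = n_of_m_select(channels, n=8)      # 8-of-22 stimulation pattern
print(stim.shape)
```

In the paper's actual system both stages are trainable networks optimized jointly end to end, whereas here the "enhancement" and "coding" stages are fixed functions chosen only to make the data flow concrete.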
Similar Papers
Audio-Visual Speech Enhancement: Architectural Design and Deployment Strategies
Sound
Cleans up noisy phone calls using sound and faces.
Audio-Visual Speech Enhancement In Complex Scenarios With Separation And Dereverberation Joint Modeling
Sound
Cleans up noisy speech using sight and sound.
Enhancing Cochlear Implant Signal Coding with Scaled Dot-Product Attention
Audio and Speech Processing
Helps cochlear implants encode sounds better.