End-to-End Audio-Visual Learning for Cochlear Implant Sound Coding in Noisy Environments

Published: August 19, 2025 | arXiv ID: 2508.13576v1

By: Meng-Ping Lin, Enoch Hsin-Ho Huang, Shao-Yi Chien, and more

Potential Business Impact:

Could help cochlear implant users understand speech more clearly in noisy environments.

Business Areas:
Speech Recognition Data and Analytics, Software

The cochlear implant (CI) is a remarkable biomedical device that enables individuals with severe-to-profound hearing loss to perceive sound by converting speech into electrical stimulation signals. Despite advancements in recent CI systems, speech comprehension in noisy or reverberant conditions remains a challenge. Recent and ongoing developments in deep learning reveal promising opportunities for enhancing CI sound coding, not only by replicating traditional signal processing methods with neural networks, but also by integrating visual cues as auxiliary data for multimodal speech processing. This paper therefore introduces a novel noise-suppressing CI system, AVSE-ECS, which uses an audio-visual speech enhancement (AVSE) model as a pre-processing module for the deep-learning-based ElectrodeNet-CS (ECS) sound coding strategy. Specifically, a joint training approach is applied to model AVSE-ECS as an end-to-end CI system. Experimental results indicate that the proposed method outperforms the previous ECS strategy in noisy conditions, with improved objective speech intelligibility scores. The methods and findings of this study demonstrate the feasibility and potential of using deep learning to integrate an AVSE module into an end-to-end CI system.
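The pipeline described above, an AVSE front-end whose enhanced output feeds a sound-coding back-end, can be sketched in a few lines. This is a minimal illustrative sketch only, not the paper's implementation: the function names are hypothetical, and simple arithmetic stands in for the actual neural AVSE and ElectrodeNet-CS models.

```python
# Illustrative sketch of a two-stage AVSE -> sound-coding pipeline.
# All names are hypothetical placeholders; the real AVSE and ECS
# stages in the paper are trained neural networks.

def avse_enhance(noisy, visual_gain):
    # Stand-in for the AVSE model: apply a visually informed gain
    # in [0, 1] to each audio sample to suppress noise.
    return [s * g for s, g in zip(noisy, visual_gain)]

def ecs_encode(audio, n_channels=4):
    # Stand-in for the sound coding strategy: split samples
    # round-robin into channels and take the mean absolute
    # amplitude as a crude per-channel "envelope".
    channels = [audio[c::n_channels] for c in range(n_channels)]
    return [sum(abs(s) for s in ch) / len(ch) for ch in channels]

def avse_ecs(noisy, visual_gain):
    # End-to-end composition: enhancement feeds the coder, so in a
    # joint-training setup gradients could flow through both stages.
    return ecs_encode(avse_enhance(noisy, visual_gain))

noisy = [0.5, -0.2, 0.8, -0.1, 0.4, -0.6, 0.3, -0.7]
gain  = [1.0, 0.5, 1.0, 0.5, 1.0, 0.5, 1.0, 0.5]
env = avse_ecs(noisy, gain)
print(env)  # four non-negative channel envelopes
```

In the actual system, both stages are differentiable networks, which is what makes the joint (end-to-end) training described in the abstract possible.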

Page Count
6 pages

Category
Electrical Engineering and Systems Science:
Audio and Speech Processing