Transfer Learning-Based Deep Residual Learning for Speech Recognition in Clean and Noisy Environments
By: Noussaiba Djeffal, Djamel Addou, Hamza Kheddar and more
Potential Business Impact:
Makes talking computers understand voices better.
Addressing the detrimental impact of non-stationary environmental noise on automatic speech recognition (ASR) has been a persistent and significant research focus. Despite advancements, this challenge remains a major concern. Recently, data-driven supervised approaches, such as deep neural networks, have emerged as promising alternatives to traditional unsupervised methods. With extensive training, these approaches have the potential to overcome the challenges posed by diverse real-life acoustic environments. In this light, this paper introduces a novel neural framework that incorporates a robust frontend into ASR systems for both clean and noisy environments. Using the Aurora-2 speech database, the authors evaluate the effectiveness of a Mel-frequency acoustic feature set, employing transfer learning based on residual neural networks (ResNet). The experimental results demonstrate a significant improvement in recognition accuracy compared to convolutional neural networks (CNN) and long short-term memory (LSTM) networks, achieving accuracies of 98.94% in clean and 91.21% in noisy conditions.
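The core idea behind the ResNet architecture named in the abstract is the residual block, in which the network learns a correction F(x) that is added back to its input through an identity shortcut, y = relu(x + F(x)). The sketch below is a minimal, hedged illustration of that skip connection in plain NumPy; the layer sizes (a 40-dimensional Mel-frequency frame) and weight shapes are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Minimal residual block: y = relu(x + F(x)), where F is two
    linear layers with a ReLU in between. The identity shortcut lets
    the signal (and gradients) bypass F, which is what makes very
    deep ResNets trainable."""
    fx = relu(x @ w1) @ w2      # residual branch F(x)
    return relu(x + fx)         # skip connection adds the input back

# Hypothetical input: one 40-dimensional Mel-frequency feature frame.
rng = np.random.default_rng(0)
frame = rng.standard_normal(40)
w1 = rng.standard_normal((40, 40)) * 0.01
w2 = rng.standard_normal((40, 40)) * 0.01

out = residual_block(frame, w1, w2)
print(out.shape)  # (40,)
```

Note that if the residual branch contributes nothing (all-zero weights), the block reduces to relu(x): the shortcut guarantees a deep stack can never do worse than passing features through, which is the property transfer learning builds on when reusing pretrained ResNet layers.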
Similar Papers
Regularizing Learnable Feature Extraction for Automatic Speech Recognition
Audio and Speech Processing
Makes talking computers understand speech better.
Toward Noise-Aware Audio Deepfake Detection: Survey, SNR-Benchmarks, and Practical Recipes
Sound
Spots fake voices, even in noisy recordings.
Unsupervised Speech Enhancement using Data-defined Priors
Audio and Speech Processing
Cleans up noisy voices without needing perfect examples.