Optimizing Neural Architectures for Hindi Speech Separation and Enhancement in Noisy Environments
By: Arnav Ramamoorthy
Potential Business Impact:
Cleans up noisy Hindi speech for better listening.
This paper addresses the challenges of Hindi speech separation and enhancement using advanced neural network architectures, with a focus on edge devices. We propose a refined approach leveraging the DEMUCS model to overcome limitations of traditional methods, achieving substantial improvements in speech clarity and intelligibility. The model, which combines U-Net and LSTM layers, is fine-tuned on a dataset of 400,000 Hindi speech clips augmented with noise from ESC-50 and MS-SNSD to simulate diverse acoustic environments. Evaluation using the PESQ and STOI metrics shows superior performance, particularly under extreme noise conditions. To enable deployment on resource-constrained devices such as TWS earbuds, we explore quantization techniques that reduce the model's computational requirements. This research highlights the effectiveness of customized AI algorithms for speech processing in Indian-language contexts and suggests future directions for optimizing edge-based architectures.
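The abstract mentions quantization for deployment on resource-constrained devices but does not specify the scheme used. As a minimal sketch, assuming a common post-training approach, the snippet below shows symmetric per-tensor int8 weight quantization: each float32 weight tensor is mapped to 8-bit integers plus a single scale factor, cutting memory roughly 4x at the cost of a bounded rounding error. The function names are illustrative, not taken from the paper.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization (illustrative sketch).

    Maps float32 weights to [-127, 127] with a single scale factor.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 values."""
    return q.astype(np.float32) * scale

# Toy weight tensor standing in for a layer of the network.
w = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Worst-case rounding error is half the quantization step (scale / 2).
max_err = np.max(np.abs(w_hat - w))
print(f"int8 storage: {q.nbytes} bytes vs float32: {w.nbytes} bytes")
print(f"max reconstruction error: {max_err:.5f} (step/2 = {scale / 2:.5f})")
```

In practice, frameworks apply per-channel scales and also quantize activations, but the storage/accuracy trade-off follows the same arithmetic shown here.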
Similar Papers
Reverse Attention for Lightweight Speech Enhancement on Edge Devices
Audio and Speech Processing
Cleans up noisy voice recordings.
Transformer Redesign for Late Fusion of Audio-Text Features on Ultra-Low-Power Edge Hardware
Sound
Helps tiny computers understand feelings from voices.
Audio-Visual Speech Enhancement: Architectural Design and Deployment Strategies
Sound
Cleans up noisy phone calls using sound and faces.