Designing Practical Models for Isolated Word Visual Speech Recognition
By: Iason Ioannis Panagos, Giorgos Sfikas, Christophoros Nikou
Potential Business Impact:
Lets computers understand speech from lip movements.
Visual speech recognition (VSR) systems decode spoken words from an input sequence using only the video data. Practical applications of such systems include medical assistance as well as human-machine interaction. A VSR system is typically employed in a complementary role when the audio is corrupted or unavailable. To accurately predict the spoken words, these architectures often rely on deep neural networks to extract meaningful representations from the input sequence. While deep architectures achieve impressive recognition performance, they incur significant computational costs, which translate into increased hardware requirements and limit applicability in real-world scenarios where resources are constrained. This prevents wider adoption and deployment of speech recognition systems in practical applications. In this work, we aim to alleviate this issue by developing VSR architectures with low hardware costs. Following the standard two-network design paradigm, in which one network extracts visual features and another classifies the entire sequence from those features, we develop lightweight end-to-end architectures by first benchmarking efficient models from the image classification literature and then adopting lightweight block designs in a temporal convolution network backbone. We create several unified models with low resource requirements yet strong recognition performance. Experiments on the largest public database for English words demonstrate the effectiveness and practicality of our models. Code and trained models will be made publicly available.
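The following is a minimal PyTorch sketch of the two-network design paradigm described in the abstract: an efficient image-classification-style frontend extracts per-frame features from the mouth region, and a temporal convolution backend classifies the whole sequence into a word class. All module names (LightweightVSR, DepthwiseSeparableBlock, TemporalConvBlock), layer counts, and hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a lightweight two-network VSR pipeline (frontend + temporal backend).
# Module choices and sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class DepthwiseSeparableBlock(nn.Module):
    """Lightweight 2D block in the spirit of efficient image classifiers."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class TemporalConvBlock(nn.Module):
    """1D temporal convolution block with a residual connection."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) // 2 * dilation
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad, dilation=dilation, bias=False),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.conv(x)


class LightweightVSR(nn.Module):
    """Frontend extracts per-frame features; a TCN backend classifies the word."""
    def __init__(self, num_classes=500, feat_dim=256):
        super().__init__()
        # 3D stem captures short-range motion before per-frame processing.
        self.stem = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d((1, 2, 2)),
        )
        # Efficient 2D frontend applied to every frame independently.
        self.frontend = nn.Sequential(
            DepthwiseSeparableBlock(32, 64, stride=2),
            DepthwiseSeparableBlock(64, feat_dim, stride=2),
            nn.AdaptiveAvgPool2d(1),
        )
        # Temporal convolution backend over the per-frame feature sequence.
        self.backend = nn.Sequential(
            TemporalConvBlock(feat_dim, dilation=1),
            TemporalConvBlock(feat_dim, dilation=2),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        # x: (batch, 1, frames, height, width) grayscale mouth crops.
        b = x.size(0)
        x = self.stem(x)                      # (B, C, T, H, W)
        t = x.size(2)
        x = x.transpose(1, 2).flatten(0, 1)   # (B*T, C, H, W)
        x = self.frontend(x).flatten(1)       # (B*T, feat_dim)
        x = x.view(b, t, -1).transpose(1, 2)  # (B, feat_dim, T)
        x = self.backend(x).mean(dim=2)       # average over time
        return self.classifier(x)             # word-level logits


if __name__ == "__main__":
    model = LightweightVSR(num_classes=500)
    clip = torch.randn(2, 1, 29, 88, 88)  # two 29-frame mouth-region clips
    print(model(clip).shape)              # torch.Size([2, 500])
```

Depthwise-separable 2D convolutions and residual 1D temporal convolutions keep parameter counts and compute low, which is the general trade-off the paper targets for resource-constrained deployment; the actual efficient frontends and block designs used in the paper may differ.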
Similar Papers
Scalable Frameworks for Real-World Audio-Visual Speech Recognition
Audio and Speech Processing
Helps computers understand speech even with noise.
Visual-Aware Speech Recognition for Noisy Scenarios
Computation and Language
Helps computers hear speech in noisy places.
Landmark Guided Visual Feature Extractor for Visual Speech Recognition with Limited Resource
CV and Pattern Recognition
Lets computers "hear" words from silent videos.