VGG Induced Deep Hand Sign Language Detection
By: Subham Sharma, Sharmila Subudhi
Hand gesture recognition is an important aspect of human-computer interaction and forms the basis of sign language for hearing-impaired people. This work proposes a novel hand gesture recognition system for differently-abled persons. The model uses the VGG-16 convolutional neural network, trained on a widely used image dataset with Python and the Keras library. The result is validated on the NUS dataset, whose 10 classes of hand gestures are fed to the model as the validation set. A testing dataset of 10 classes is then built with Google's open-source Application Programming Interface (API), which captures different gestures of the human hand, and the model's efficacy is measured experimentally. The experimental results show that combining transfer learning with image data augmentation allows the VGG-16 net to reach around 98% accuracy.
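A minimal sketch of the transfer-learning pipeline the abstract describes, using the Keras API it names. The directory paths, augmentation settings, classifier head, and training hyperparameters below are illustrative assumptions, not values taken from the paper; only the VGG-16 backbone, the 10 gesture classes, and the use of image augmentation come from the abstract.

```python
# Sketch: VGG-16 transfer learning with image data augmentation
# for a 10-class hand gesture recognizer (assumed details noted inline).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 10          # 10 hand-gesture classes, as in the NUS dataset
IMG_SIZE = (224, 224)     # VGG-16's standard input resolution

# Image data augmentation on the training set (settings are assumptions).
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
).flow_from_directory(
    "data/train",             # hypothetical layout: one subfolder per class
    target_size=IMG_SIZE,
    batch_size=32,
    class_mode="categorical",
)

val_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/nus_validation",    # hypothetical path to the NUS validation split
    target_size=IMG_SIZE,
    batch_size=32,
    class_mode="categorical",
)

# Transfer learning: reuse the ImageNet-pretrained VGG-16 convolutional
# base, freeze it, and train a new head for the 10 gesture classes.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),  # head size is an assumption
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

model.fit(train_gen, validation_data=val_gen, epochs=10)
```

Freezing the convolutional base and training only the new head is the standard first stage of transfer learning; a second fine-tuning stage that unfreezes the top VGG-16 blocks at a lower learning rate is a common follow-up, though the abstract does not specify one.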
Similar Papers
Real-Time Sign Language Gestures to Speech Transcription using Deep Learning
CV and Pattern Recognition
Translates sign language gestures into speech in real time.
Visual Hand Gesture Recognition with Deep Learning: A Comprehensive Review of Methods, Datasets, Challenges and Future Research Directions
CV and Pattern Recognition
Reviews deep learning methods, datasets, and open challenges in visual hand gesture recognition.
Indian Sign Language Detection for Real-Time Translation using Machine Learning
CV and Pattern Recognition
Detects Indian Sign Language gestures for real-time translation.