Real-Time Sign Language to Text Translation Using Deep Learning: A Comparative Study of LSTM and 3D CNN
By: Madhumati Pol, Anvay Anturkar, Anushka Khot, and more
Potential Business Impact:
Helps computers understand sign language in real time.
This study investigates the performance of 3D Convolutional Neural Networks (3D CNNs) and Long Short-Term Memory (LSTM) networks for real-time American Sign Language (ASL) recognition. While 3D CNNs excel at extracting spatiotemporal features from video sequences, LSTMs are optimized for modeling temporal dependencies in sequential data. We evaluate both architectures on a dataset of 1,200 ASL signs across 50 classes, comparing their accuracy, computational efficiency, and latency under similar training conditions. Experimental results demonstrate that 3D CNNs achieve 92.4% recognition accuracy but require 3.2% more processing time per frame than LSTMs, which maintain 86.7% accuracy with significantly lower resource consumption. The hybrid 3D CNN-LSTM model shows solid performance, suggesting that context-dependent architecture selection is crucial for practical implementation. This study provides practical benchmarks for developing assistive technologies, highlighting the trade-offs between recognition precision and real-time operational requirements in edge computing environments.
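As a rough illustration of the two architectures being compared, here is a minimal PyTorch sketch, not the authors' implementation: all layer sizes, input resolutions, and sequence lengths are assumptions chosen for demonstration, and only the 50-class output comes from the abstract.

```python
# Illustrative sketch (not the paper's code): minimal PyTorch versions of the
# two architectures compared in the study. Shapes and hyperparameters are
# assumed; only the 50-class output matches the abstract.
import time
import torch
import torch.nn as nn

NUM_CLASSES = 50  # from the abstract: 50 ASL sign classes


class Sign3DCNN(nn.Module):
    """3D CNN: extracts spatiotemporal features directly from video clips."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),  # (B, 3, T, H, W) -> (B, 32, T, H, W)
            nn.ReLU(),
            nn.MaxPool3d(2),                             # halve T, H, W
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pooling -> (B, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, frames, height, width)
        return self.classifier(self.features(clip).flatten(1))


class SignLSTM(nn.Module):
    """LSTM: models temporal dependencies over per-frame feature vectors
    (e.g., keypoints or CNN features extracted upstream)."""

    def __init__(self, feat_dim: int = 256, hidden: int = 128,
                 num_classes: int = NUM_CLASSES):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, frames, feat_dim); classify from the last hidden state
        _, (h_n, _) = self.lstm(seq)
        return self.classifier(h_n[-1])


# Smoke test with assumed input sizes: 16-frame 64x64 RGB clips for the
# 3D CNN, 16-step 256-dim feature sequences for the LSTM.
video = torch.randn(2, 3, 16, 64, 64)
feats = torch.randn(2, 16, 256)
print(Sign3DCNN()(video).shape)  # torch.Size([2, 50])
print(SignLSTM()(feats).shape)   # torch.Size([2, 50])

# Rough per-clip latency comparison (CPU, untrained weights), mirroring the
# paper's accuracy/latency trade-off measurements in spirit only.
for name, model, x in [("3D CNN", Sign3DCNN(), video), ("LSTM", SignLSTM(), feats)]:
    model.eval()
    with torch.no_grad():
        t0 = time.perf_counter()
        for _ in range(10):
            model(x)
        print(f"{name}: {(time.perf_counter() - t0) / 10 * 1e3:.1f} ms/clip")
```

The timing loop at the end is only indicative: the actual latencies reported in the abstract depend on the authors' hardware, input resolution, and trained weights.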
Similar Papers
Real-Time Sign Language Gestures to Speech Transcription using Deep Learning
CV and Pattern Recognition
Translates sign language into speech instantly.
A Comparative Analysis of Recurrent and Attention Architectures for Isolated Sign Language Recognition
Computation and Language
Helps computers understand sign language better.
SLRNet: A Real-Time LSTM-Based Sign Language Recognition System
CV and Pattern Recognition
Lets computers understand sign language from your webcam.