Isolated Sign Language Recognition with Segmentation and Pose Estimation
By: Daniel Perkins, Davis Hunter, Dhrumil Patel, and more
Potential Business Impact:
Helps computers recognize individual American Sign Language signs in videos.
The recent surge in large language models has automated translation between spoken and written languages. However, these advances remain largely inaccessible to American Sign Language (ASL) users, whose language relies on complex visual cues. Isolated sign language recognition (ISLR), the task of classifying videos of individual signs, can help bridge this gap, but it is currently limited by scarce per-sign data, high signer variability, and substantial computational costs. We propose a model for ISLR that reduces computational requirements while maintaining robustness to signer variation. Our approach integrates (i) a pose estimation pipeline that extracts hand and face joint coordinates, (ii) a segmentation module that isolates the relevant regions of each frame, and (iii) a ResNet-Transformer backbone that jointly models spatial and temporal dependencies.
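The abstract does not spell out the exact configuration of the ResNet-Transformer backbone, but a minimal sketch might look like the following: a per-frame ResNet-18 spatial encoder applied to segmented clips, followed by a Transformer encoder over the frame sequence. The class name, the 250-sign vocabulary, the 16-frame clip length, and all hyperparameters here are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNetTransformerISLR(nn.Module):
    """Hypothetical ISLR backbone: per-frame ResNet features, then a
    Transformer encoder over time, then a linear sign classifier."""

    def __init__(self, num_classes=250, nhead=8, num_layers=4):
        super().__init__()
        self.cnn = resnet18(weights=None)  # spatial encoder run on each frame
        self.cnn.fc = nn.Identity()        # expose 512-d features, not ImageNet logits
        layer = nn.TransformerEncoderLayer(d_model=512, nhead=nhead, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers)  # temporal dependencies
        self.head = nn.Linear(512, num_classes)

    def forward(self, clip):
        # clip: (batch, frames, 3, H, W), e.g. signer-segmented RGB frames
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1))  # (b*t, 512) per-frame spatial features
        feats = feats.view(b, t, -1)          # restore the time axis
        feats = self.temporal(feats)          # self-attention across frames
        return self.head(feats.mean(dim=1))   # average-pool over time -> sign logits

# Usage with a dummy 16-frame, 224x224 segmented clip (shapes are assumptions):
model = ResNetTransformerISLR(num_classes=250)
logits = model(torch.randn(1, 16, 3, 224, 224))  # -> (1, 250)
```

In the paper's pipeline, the pose-estimation and segmentation stages would run upstream of a model like this; whether the extracted joint coordinates are fed alongside the pixels or in place of them is not specified in the abstract.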
Similar Papers
Data-Efficient American Sign Language Recognition via Few-Shot Prototypical Networks
CV and Pattern Recognition
Teaches computers to recognize signs from only a few examples.
RoCoISLR: A Romanian Corpus for Isolated Sign Language Recognition
CV and Pattern Recognition
Helps computers understand Romanian sign language.
SegSLR: Promptable Video Segmentation for Isolated Sign Language Recognition
CV and Pattern Recognition
Uses promptable video segmentation to improve sign recognition.