Isolated Sign Language Recognition with Segmentation and Pose Estimation

Published: December 16, 2025 | arXiv ID: 2512.14876v1

By: Daniel Perkins, Davis Hunter, Dhrumil Patel, and more

Potential Business Impact:

Helps computers understand sign language from videos.

Business Areas:
Image Recognition, Data and Analytics, Software

The recent surge in large language models has automated translations of spoken and written languages. However, these advances remain largely inaccessible to American Sign Language (ASL) users, whose language relies on complex visual cues. Isolated sign language recognition (ISLR) - the task of classifying videos of individual signs - can help bridge this gap but is currently limited by scarce per-sign data, high signer variability, and substantial computational costs. We propose a model for ISLR that reduces computational requirements while maintaining robustness to signer variation. Our approach integrates (i) a pose estimation pipeline to extract hand and face joint coordinates, (ii) a segmentation module that isolates relevant information, and (iii) a ResNet-Transformer backbone to jointly model spatial and temporal dependencies.
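The three-stage pipeline described above can be sketched in outline. This is a minimal illustration, not the authors' implementation: the function names, the use of a MediaPipe-style joint layout, and the joint counts are all assumptions made for clarity.

```python
# Minimal sketch of the three-stage ISLR pipeline: (i) pose estimation,
# (ii) segmentation of relevant joints, (iii) a sequence handed to a
# spatio-temporal backbone. Joint counts follow a MediaPipe-style layout
# (an assumption, not the paper's specification).

def estimate_pose(frame):
    """Stand-in for a pose-estimation model: returns (x, y)
    coordinates for body, hand, and face joints."""
    # Hypothetical layout: 33 body, 21 per hand, 68 face joints.
    return {
        "body": [(0.0, 0.0)] * 33,
        "left_hand": [(0.0, 0.0)] * 21,
        "right_hand": [(0.0, 0.0)] * 21,
        "face": [(0.0, 0.0)] * 68,
    }

def segment(joints):
    """Segmentation step: keep only the hand and face joints, which
    the abstract identifies as carrying the relevant information."""
    return joints["left_hand"] + joints["right_hand"] + joints["face"]

def extract_sequence(video_frames):
    """Per-frame pose estimation plus segmentation yields a
    (frames x joints x 2) coordinate sequence; the ResNet-Transformer
    backbone would then model spatial structure within each frame and
    temporal structure across frames."""
    return [segment(estimate_pose(f)) for f in video_frames]

seq = extract_sequence([None] * 16)  # a 16-frame clip
print(len(seq), len(seq[0]))         # frames, joints kept per frame
```

Working on joint coordinates rather than raw pixels is what reduces the computational cost: the backbone sees a few hundred numbers per frame instead of a full image, and discarding body-pose and background information also removes much of the signer-to-signer variability.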

Page Count
7 pages

Category
Computer Science:
CV and Pattern Recognition