Score: 3

SignRep: Enhancing Self-Supervised Sign Representations

Published: March 11, 2025 | arXiv ID: 2503.08529v1

By: Ryan Wong, Necati Cihan Camgoz, Richard Bowden

BigTech Affiliations: Meta

Potential Business Impact:

Teaches computers to recognize and translate sign language with far less labeled data.

Business Areas:
Image Recognition, Data and Analytics, Software

Sign language representation learning presents unique challenges due to the complex spatio-temporal nature of signs and the scarcity of labeled datasets. Existing methods often rely either on models pre-trained on general visual tasks, which lack sign-specific features, or on complex multimodal and multi-branch architectures. To bridge this gap, we introduce a scalable, self-supervised framework for sign representation learning. We inject important sign-specific inductive priors into the training of our RGB model by using simple but informative skeleton-based cues while pretraining a masked autoencoder. These sign-specific priors, alongside feature regularization and an adversarial style-agnostic loss, provide a powerful backbone. Notably, our model does not require skeletal keypoints during inference, avoiding the limitations of keypoint-based models on downstream tasks. When finetuned, we achieve state-of-the-art performance for sign recognition on the WLASL, ASL-Citizen and NMFs-CSL datasets, using a simpler architecture and only a single modality. Beyond recognition, our frozen model excels in sign dictionary retrieval and sign translation, surpassing standard MAE pretraining and skeletal-based representations in retrieval. It also reduces the computational cost of training existing sign translation models while maintaining strong performance on Phoenix2014T, CSL-Daily and How2Sign.
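
The core recipe described in the abstract, masked autoencoding on RGB input with skeleton-derived targets used only at pretraining time, can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the `SignMAE` class, its dimensions, and the single keypoint head are assumptions, and the feature regularization and adversarial style-agnostic loss are omitted for brevity.

```python
# Hypothetical sketch of MAE pretraining with a skeleton-keypoint auxiliary
# target. Keypoints guide training but are not needed at inference, matching
# the paper's claim that the model is keypoint-free downstream.
import torch
import torch.nn as nn

class SignMAE(nn.Module):
    def __init__(self, dim=256, num_keypoints=21, patch_dim=3 * 16 * 16):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)  # patchified frames -> tokens
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.pixel_head = nn.Linear(dim, patch_dim)             # MAE reconstruction
        self.keypoint_head = nn.Linear(dim, num_keypoints * 2)  # sign prior target

    def forward(self, patches, mask):
        tokens = self.embed(patches)
        # Simplification: zero out masked tokens instead of dropping them
        # from the encoder as a full MAE would.
        visible = tokens * (~mask).unsqueeze(-1)
        feats = self.encoder(visible)
        return self.pixel_head(feats), self.keypoint_head(feats.mean(dim=1))

model = SignMAE()
patches = torch.randn(2, 49, 3 * 16 * 16)  # (batch, tokens, patch_dim)
mask = torch.rand(2, 49) < 0.75            # 75% masking ratio
recon, keypoints = model(patches, mask)
loss = nn.functional.mse_loss(recon[mask], patches[mask])  # masked-patch loss
# In pretraining one would add a regression loss between `keypoints` and
# precomputed skeletons; at inference the keypoint head is simply discarded.
```

At inference only the encoder is kept, which is why the frozen features transfer to recognition, retrieval, and translation without any skeleton extraction step.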

Country of Origin
🇬🇧 🇺🇸 United Kingdom, United States

Page Count
17 pages

Category
Computer Science:
Computer Vision and Pattern Recognition