SONAR-SLT: Multilingual Sign Language Translation via Language-Agnostic Sentence Embedding Supervision
By: Yasser Hamidullah, Shakib Yazdani, Cennet Oguz, and others
Potential Business Impact:
Translates sign language into many spoken languages.
Sign language translation (SLT) is typically trained with text in a single spoken language, which limits scalability and cross-language generalization. Earlier approaches replaced gloss supervision with text-based sentence embeddings, but so far these have remained tied to a specific language and modality. In contrast, we supervise SLT with language-agnostic, multimodal embeddings trained on text and speech from multiple languages, enabling direct multilingual translation. To address data scarcity, we propose a coupled augmentation method that combines multilingual target augmentations (i.e., translations into many languages) with video-level perturbations, improving model robustness. Experiments show consistent BLEURT gains over text-only sentence embedding supervision, with larger improvements in low-resource settings. Our results demonstrate that language-agnostic embedding supervision, combined with coupled augmentation, provides a scalable and semantically robust alternative to traditional SLT training.
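For intuition, here is a minimal PyTorch-style sketch of the two ideas in the abstract: regressing a video-derived sentence embedding onto a frozen language-agnostic target embedding (the kind SONAR provides for text and speech), and coupling video-level perturbations with multilingual targets. All names (`SignVideoEncoder`, `embedding_supervision_loss`, `coupled_batch`), the GRU pooling, and the cosine loss are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignVideoEncoder(nn.Module):
    """Hypothetical encoder: maps a sequence of per-frame video features
    to a single fixed-size sentence embedding."""
    def __init__(self, feat_dim: int = 512, emb_dim: int = 1024):
        super().__init__()
        self.temporal = nn.GRU(feat_dim, emb_dim, batch_first=True)
        self.proj = nn.Linear(emb_dim, emb_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim)
        _, h = self.temporal(frames)      # h: (1, batch, emb_dim)
        return self.proj(h.squeeze(0))    # (batch, emb_dim)

def embedding_supervision_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Pull the predicted video embedding toward the precomputed
    language-agnostic target embedding; cosine distance is one
    plausible choice of training objective."""
    return (1.0 - F.cosine_similarity(pred, target, dim=-1)).mean()

def coupled_batch(frames, target_embs_by_lang, langs, noise_std=0.01):
    """Coupled augmentation (sketch): perturb the video and, at the same
    time, sample the supervision target from one of several languages,
    i.e. the embedding of a translation of the reference sentence."""
    lang = langs[torch.randint(len(langs), (1,)).item()]
    noisy = frames + noise_std * torch.randn_like(frames)  # video perturbation
    return noisy, target_embs_by_lang[lang]
```

In this view, training only minimizes the distance between video embeddings and frozen multilingual targets; at inference, the predicted embedding would be decoded into any supported spoken language by the embedding space's own text decoder, which is what makes the supervision language-agnostic.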
Similar Papers
Sign Language Translation with Sentence Embedding Supervision
Computation and Language
Teaches computers to translate sign language without gloss annotations.
Spatio-temporal Sign Language Representation and Translation
Computation and Language
Translates sign language into written words.
MultiStream-LLM: Bridging Modalities for Robust Sign Language Translation
Computation and Language
Improves sign language translation by combining multiple specialized modality streams.