Spatio-temporal Sign Language Representation and Translation
By: Yasser Hamidullah, Josef van Genabith, Cristina España-Bonet
Potential Business Impact:
Translates sign language into written words.
This paper describes the DFKI-MLT submission to the WMT-SLT 2022 sign language translation (SLT) task from Swiss German Sign Language (video) into German (text). State-of-the-art techniques for SLT use a generic seq2seq architecture with customized input embeddings. Instead of the word embeddings used in textual machine translation, SLT systems use features extracted from video frames. Standard approaches often do not benefit from temporal features. In our participation, we present a system that learns spatio-temporal feature representations and translation in a single model, resulting in a truly end-to-end architecture that is expected to generalize better to new data sets. Our best system achieved $5\pm1$ BLEU points on the development set, but the performance on the test set dropped to $0.11\pm0.06$ BLEU points.
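The sketch below illustrates the kind of architecture the abstract describes: a spatio-temporal feature extractor over raw video frames feeding a seq2seq translation model, trained jointly end to end. It is a minimal illustration only, not the authors' implementation; all module choices, layer sizes, and names (e.g. SpatioTemporalEncoder, EndToEndSLT) are assumptions.

```python
# Minimal sketch (not the authors' code): an end-to-end SLT model that learns
# spatio-temporal features from video frames and translates in one network.
# All module sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class SpatioTemporalEncoder(nn.Module):
    """3D convolutions over (time, height, width) yield one feature per time step."""
    def __init__(self, d_model=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=(3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3)),
            nn.ReLU(),
            nn.Conv3d(64, d_model, kernel_size=(3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep the time axis, pool space away
        )

    def forward(self, video):                    # video: (batch, 3, T, H, W)
        feats = self.conv(video)                 # (batch, d_model, T, 1, 1)
        return feats.squeeze(-1).squeeze(-1).transpose(1, 2)  # (batch, T, d_model)

class EndToEndSLT(nn.Module):
    """Spatio-temporal encoder + seq2seq Transformer trained jointly on (video, text)."""
    def __init__(self, vocab_size, d_model=256):
        super().__init__()
        self.encoder = SpatioTemporalEncoder(d_model)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.seq2seq = nn.Transformer(d_model=d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, video, tgt_tokens):
        memory_in = self.encoder(video)          # learned video representation
        tgt = self.embed(tgt_tokens)             # target-side word embeddings
        dec = self.seq2seq(memory_in, tgt)
        return self.out(dec)                     # logits over the German vocabulary

# Example usage with random data: 2 clips of 16 frames at 64x64, 10-token targets.
model = EndToEndSLT(vocab_size=8000)
video = torch.randn(2, 3, 16, 64, 64)
tokens = torch.randint(0, 8000, (2, 10))
logits = model(video, tokens)                    # shape (2, 10, 8000)
```

The point of training the feature extractor and the translator together, as the abstract argues, is that the video representation is learned for the translation objective rather than fixed by a separate frame-level feature pipeline.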
Similar Papers
Sign Language Translation with Sentence Embedding Supervision
Computation and Language
Teaches computers to translate sign language without labels.
SONAR-SLT: Multilingual Sign Language Translation via Language-Agnostic Sentence Embedding Supervision
Computation and Language
Translates sign language to many spoken languages.
MultiStream-LLM: Bridging Modalities for Robust Sign Language Translation
Computation and Language
Translates sign language better by combining multiple specialized streams.