Towards Inclusive Communication: A Unified LLM-Based Framework for Sign Language, Lip Movements, and Audio Understanding
By: Jeong Hun Yeo, Hyeongseop Rha, Sungjune Park, and more
Potential Business Impact:
Lets computers understand talking, read lips, and follow sign language.
Audio is the primary modality for human communication and has driven the success of Automatic Speech Recognition (ASR) technologies. However, such systems remain inherently inaccessible to individuals who are deaf or hard of hearing. Visual alternatives such as sign language and lip reading offer effective substitutes, and recent advances in Sign Language Translation (SLT) and Visual Speech Recognition (VSR) have improved audio-less communication. Yet, these modalities have largely been studied in isolation, and their integration within a unified framework remains underexplored. In this paper, we introduce the first unified framework capable of handling diverse combinations of sign language, lip movements, and audio for spoken-language text generation. We focus on three main objectives: (i) designing a unified, modality-agnostic architecture capable of effectively processing heterogeneous inputs; (ii) exploring the underexamined synergy among modalities, particularly the role of lip movements as non-manual cues in sign language comprehension; and (iii) achieving performance on par with or superior to state-of-the-art models specialized for individual tasks. Building on this framework, we achieve performance on par with or better than task-specific state-of-the-art models across SLT, VSR, ASR, and AVSR. Furthermore, our analysis reveals that explicitly modeling lip movements as a separate modality significantly improves SLT performance.
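To make the idea of a "unified, modality-agnostic architecture" concrete, below is a minimal sketch (not the authors' implementation) of how sign-language video, lip-movement video, and audio features could each be projected into a shared LLM embedding space and concatenated as a soft-token prefix for text generation. All layer names, feature dimensions, and the `UnifiedMultimodalPrefix` class are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a modality-agnostic front end for an LLM-based translator:
# each available modality is encoded separately, projected into the LLM's embedding
# dimension, and concatenated as a prefix of "soft" tokens. Feature sizes are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn


class ModalityProjector(nn.Module):
    """Maps one modality's features to the LLM embedding dimension."""

    def __init__(self, feat_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) -> (batch, time, llm_dim)
        return self.proj(feats)


class UnifiedMultimodalPrefix(nn.Module):
    """Builds a single prefix sequence from any subset of modalities."""

    def __init__(self, llm_dim: int = 2048):
        super().__init__()
        # Hypothetical per-modality feature sizes for sign, lip, and audio encoders.
        self.projectors = nn.ModuleDict({
            "sign": ModalityProjector(1024, llm_dim),
            "lip": ModalityProjector(512, llm_dim),
            "audio": ModalityProjector(768, llm_dim),
        })

    def forward(self, inputs: dict) -> torch.Tensor:
        # `inputs` holds pre-extracted features for whichever modalities are present,
        # so the same model handles SLT, VSR, ASR, or AVSR-style combinations.
        prefix = [self.projectors[name](feats) for name, feats in inputs.items()]
        return torch.cat(prefix, dim=1)  # (batch, total_time, llm_dim)


if __name__ == "__main__":
    model = UnifiedMultimodalPrefix()
    batch = {
        "sign": torch.randn(1, 64, 1024),   # sign-language video features
        "lip": torch.randn(1, 64, 512),     # lip-movement (non-manual cue) features
        "audio": torch.randn(1, 100, 768),  # speech features
    }
    prefix = model(batch)
    print(prefix.shape)  # torch.Size([1, 228, 2048]); fed to the LLM as soft tokens
```

In this reading, dropping the `"audio"` entry yields a sign-plus-lip input for SLT, while dropping `"sign"` yields an audio-visual input for AVSR; how the paper actually fuses or tags modalities may differ.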
Similar Papers
MultiStream-LLM: Bridging Modalities for Robust Sign Language Translation
Computation and Language
Translates sign language better by handling each signal in its own stream.
Omni-AVSR: Towards Unified Multimodal Speech Recognition with Large Language Models
Audio and Speech Processing
Lets one model understand talking from both sound and sight.
Reading to Listen at the Cocktail Party: Multi-Modal Speech Separation
Audio and Speech Processing
Cleans up noisy talking using sight and sound.