VocSegMRI: Multimodal Learning for Precise Vocal Tract Segmentation in Real-time MRI
By: Daiqi Liu, Tomás Arias-Vergara, Johannes Enk, and more
Potential Business Impact:
Helps clinicians see and measure the moving structures of the vocal tract more accurately during speech.
Accurately segmenting articulatory structures in real-time magnetic resonance imaging (rtMRI) remains challenging, as most existing methods rely almost entirely on visual cues. Yet synchronized acoustic and phonological signals provide complementary context that can enrich visual information and improve precision. In this paper, we introduce VocSegMRI, a multimodal framework that integrates video, audio, and phonological inputs through cross-attention fusion for dynamic feature alignment. To further enhance cross-modal representation, we incorporate a contrastive learning objective that improves segmentation performance even when the audio modality is unavailable at inference. Evaluated on a subset of the USC-75 rtMRI dataset, our approach achieves state-of-the-art performance, with a Dice score of 0.95 and a 95th percentile Hausdorff Distance (HD_95) of 4.20 mm, outperforming both unimodal and multimodal baselines. Ablation studies confirm the contributions of cross-attention and contrastive learning to segmentation precision and robustness. These results highlight the value of integrative multimodal modeling for accurate vocal tract analysis.
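The abstract names two core ingredients: cross-attention fusion, in which video features query the audio and phonological streams, and a contrastive objective that aligns modalities so segmentation still works when audio is absent at inference. The sketch below is a minimal, hypothetical illustration of those two ideas in PyTorch; the module names, feature dimensions, and InfoNCE-style loss are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch: cross-attention fusion of video/audio/phonological
# features plus a contrastive alignment loss. Names and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionFusion(nn.Module):
    """Fuse video features with audio and phonological features via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Video features act as queries; audio and phonological features as keys/values.
        self.attn_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_phono = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video, audio=None, phono=None):
        # video: (B, Tv, dim), audio: (B, Ta, dim), phono: (B, Tp, dim)
        fused = video
        if audio is not None:  # audio may be missing at inference
            attended, _ = self.attn_audio(query=fused, key=audio, value=audio)
            fused = fused + attended
        if phono is not None:
            attended, _ = self.attn_phono(query=fused, key=phono, value=phono)
            fused = fused + attended
        return self.norm(fused)


def info_nce_loss(video_emb, audio_emb, temperature: float = 0.07):
    """Symmetric InfoNCE-style loss aligning pooled video and audio embeddings."""
    v = F.normalize(video_emb, dim=-1)   # (B, dim)
    a = F.normalize(audio_emb, dim=-1)   # (B, dim)
    logits = v @ a.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    B, Tv, Ta, Tp, dim = 2, 16, 32, 8, 256
    fusion = CrossAttentionFusion(dim)
    video = torch.randn(B, Tv, dim)
    audio = torch.randn(B, Ta, dim)
    phono = torch.randn(B, Tp, dim)
    fused = fusion(video, audio, phono)           # all modalities available
    fused_no_audio = fusion(video, phono=phono)   # audio dropped at inference
    loss = info_nce_loss(video.mean(1), audio.mean(1))
    print(fused.shape, fused_no_audio.shape, loss.item())
```

The contrastive term pulls together video and audio embeddings of the same frame sequence during training, which is one plausible way to explain why the fused video representation remains useful when the audio branch is skipped at inference.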
Similar Papers
Towards disentangling the contributions of articulation and acoustics in multimodal phoneme recognition
Machine Learning (CS)
Helps computers understand how we talk better.
Audio-Vision Contrastive Learning for Phonological Class Recognition
Sound
Helps computers understand how people talk by watching their mouths.