Topological Alignment of Shared Vision-Language Embedding Space
By: Junwon You, Dasol Kang, Jae-Hun Jung
Potential Business Impact:
Makes AI understand pictures in many languages.
Contrastive Vision-Language Models (VLMs) have demonstrated strong zero-shot capabilities. However, their cross-modal alignment remains biased toward English due to limited multilingual multimodal data. Recent multilingual extensions have narrowed this gap but enforce instance-level alignment while neglecting the global geometry of the shared embedding space. We address this problem by introducing ToMCLIP (Topological Alignment for Multilingual CLIP), a topology-aware framework that aligns embedding spaces under topology-preserving constraints. The proposed method applies persistent homology to define a topological alignment loss and approximates the persistence diagram, with theoretical error bounds, using a graph sparsification strategy. This work validates the proposed approach, showing enhanced structural coherence of multilingual representations, higher zero-shot accuracy on CIFAR-100, and stronger multilingual retrieval performance on xFlickr&CO. Beyond VLMs, the proposed approach provides a general method for incorporating topological alignment into representation learning.
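As a rough illustration of the idea behind a topological alignment objective, the sketch below compares the persistent-homology signatures of two text-embedding clouds (e.g., English captions and their translations) using the bottleneck distance between their H1 persistence diagrams. This is a minimal conceptual example built on the gudhi library, not the paper's ToMCLIP loss: the actual method defines a differentiable alignment loss and approximates the persistence diagram via graph sparsification, and the embedding shapes and function names here are hypothetical.

```python
# Minimal sketch (not the authors' implementation): measure how similar the
# global topology of two embedding clouds is via persistent homology.
# Requires: numpy, gudhi.
import numpy as np
import gudhi


def finite_h1_diagram(points, max_edge_length=2.0):
    """Finite H1 persistence intervals of a Vietoris-Rips filtration on `points`."""
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge_length)
    simplex_tree = rips.create_simplex_tree(max_dimension=2)
    simplex_tree.compute_persistence()
    diagram = simplex_tree.persistence_intervals_in_dimension(1)
    # Drop intervals that never die within the filtration range.
    return diagram[np.isfinite(diagram[:, 1])] if len(diagram) else diagram


# Toy stand-ins for L2-normalized text embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
english_emb = rng.normal(size=(128, 32))
multilingual_emb = english_emb + 0.05 * rng.normal(size=(128, 32))
english_emb /= np.linalg.norm(english_emb, axis=1, keepdims=True)
multilingual_emb /= np.linalg.norm(multilingual_emb, axis=1, keepdims=True)

# A topological "alignment gap": smaller values mean the two clouds share
# similar global loop structure in the shared embedding space.
gap = gudhi.bottleneck_distance(finite_h1_diagram(english_emb),
                                finite_h1_diagram(multilingual_emb))
print(f"Bottleneck distance between H1 diagrams: {gap:.4f}")
```

In practice, a training loss of this kind would need a differentiable surrogate for the diagram comparison and a sparsified graph to keep the persistence computation tractable, which is the role of the approximation with error bounds described in the abstract.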
Similar Papers
Topology-Aware CLIP Few-Shot Learning
CV and Pattern Recognition
Helps AI learn new things with fewer examples.
Enhancing CLIP Robustness via Cross-Modality Alignment
CV and Pattern Recognition
Protects AI from tricky fake pictures.
uCLIP: Parameter-Efficient Multilingual Extension of Vision-Language Models with Unpaired Data
CV and Pattern Recognition
Helps computers understand pictures in many languages.