Model alignment using inter-modal bridges
By: Ali Gholamzadeh, Noor Sajid
Potential Business Impact:
Lets different AI models work together easily.
Foundation models have demonstrated remarkable performance across modalities such as language and vision. However, model reuse across distinct modalities (e.g., text and vision) remains limited due to the difficulty of aligning internal representations. Existing methods require extensive paired training data or are constrained to specific domains. We introduce a semi-supervised approach for model alignment via conditional flow matching. The conditional flow between the latent spaces of different modalities (e.g., text-to-image or biological-to-artificial neuronal activity) can be learned in two settings: (1) solving a (balanced or unbalanced) optimal transport problem with an inter-space bridge cost, and (2) performing memory-efficient alignment using labelled exemplars. Despite being constrained by the original models' capacity, our method, under both settings, matches the downstream task performance of end-to-end trained models on object recognition and image generation tasks across the MNIST, ImageNet, and Majaj et al. (2015) datasets, particularly when labelled training data is scarce (<20%). Our method provides a data-efficient solution for inter-modal model alignment with minimal supervision.
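To make the shape of setting (1) concrete, the sketch below pairs a plain entropic optimal transport coupling with conditional flow matching between two unpaired latent clouds. It is a minimal sketch under stated assumptions, not the paper's implementation: the squared Euclidean cost (standing in for the paper's inter-space bridge cost), the MLP velocity field, and all names, dimensions, and hyperparameters here are illustrative choices.

```python
import torch
import torch.nn as nn

# Toy dimensionality; in practice x0/x1 would come from two frozen
# encoders (e.g., a text model and a vision model). A shared latent
# dimension is an assumption made here for brevity.
DIM = 64


def sinkhorn_plan(x0, x1, reg=0.05, n_iters=200):
    """Entropic OT coupling between two unpaired latent clouds.

    Plain squared Euclidean cost stands in for the paper's inter-space
    bridge cost, which the abstract does not spell out. A log-domain
    solver (or an OT library) would be preferable numerically.
    """
    cost = torch.cdist(x0, x1) ** 2
    cost = cost / cost.max()                       # normalise for stability
    K = torch.exp(-cost / reg)
    mu = torch.full((x0.shape[0],), 1.0 / x0.shape[0])
    nu = torch.full((x1.shape[0],), 1.0 / x1.shape[0])
    u, v = mu.clone(), nu.clone()
    for _ in range(n_iters):                       # Sinkhorn iterations
        u = mu / (K @ v + 1e-30)
        v = nu / (K.T @ u + 1e-30)
    return u[:, None] * K * v[None, :]             # coupling matrix


class VelocityField(nn.Module):
    """Small MLP v_theta(x, t); the paper's architecture is not stated."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))


def cfm_step(model, opt, x0, x1):
    """One conditional flow matching step on OT-coupled latent pairs."""
    with torch.no_grad():
        plan = sinkhorn_plan(x0, x1)
        idx = torch.multinomial(plan.flatten(), x0.shape[0], replacement=True)
    a, b = x0[idx // plan.shape[1]], x1[idx % plan.shape[1]]
    t = torch.rand(a.shape[0], 1)
    xt = (1 - t) * a + t * b                       # linear interpolant
    loss = ((model(xt, t) - (b - a)) ** 2).mean()  # regress target velocity
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Unpaired toy latents standing in for the two modalities.
x_src = torch.randn(512, DIM)
x_tgt = torch.randn(512, DIM) + 2.0

model = VelocityField(DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    cfm_step(model, opt, x_src, x_tgt)
```

At inference, a source latent would be mapped into the target latent space by integrating dx/dt = v(x, t) from t = 0 to 1, e.g., with a few Euler steps. Setting (2), the labelled-exemplar variant, would replace the OT coupling with the known pairs.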
Similar Papers
Semantic Alignment of Unimodal Medical Text and Vision Representations
CV and Pattern Recognition
Links text and image AI to read X-rays better.
Multimodal Representation Alignment for Cross-modal Information Retrieval
Information Retrieval
Finds matching pictures for words, and words for pictures.
Self-Supervised Spatial Correspondence Across Modalities
CV and Pattern Recognition
Matches points in different kinds of pictures.