Directed-Tokens: A Robust Multi-Modality Alignment Approach to Large Language-Vision Models
By: Thanh-Dat Truong, Huu-Thien Tran, Tran Thai Son, and more
Potential Business Impact:
Teaches AI to better understand pictures and words together.
Large multimodal models (LMMs) have achieved impressive performance across a wide range of understanding tasks. However, these models still suffer from fundamental limitations in robustness and generalization that stem from weak alignment and correlation between visual and textual features. In this paper, we introduce a simple yet efficient learning mechanism that improves robust alignment between the visual and textual modalities by solving shuffling problems. In particular, the proposed approach improves reasoning capability, visual understanding, and cross-modality alignment by introducing two new tasks, reconstructing the image order and the text order, into the LMM's pre-training and fine-tuning phases. In addition, we propose a new directed-token approach to capture visual and textual knowledge, enabling the model to reconstruct the correct order of visual inputs. Then, we introduce a new Image-to-Response Guided loss to further improve the visual understanding of the LMM in its responses. The proposed approach consistently achieves state-of-the-art (SoTA) performance compared with prior LMMs on academic task-oriented and instruction-following LMM benchmarks.
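The image-order reconstruction task mentioned in the abstract can be illustrated with a minimal sketch. The PyTorch snippet below is an assumption-based illustration, not the authors' implementation: all names, shapes, and the classification head are hypothetical. It shuffles visual patch tokens and trains a small head to predict each token's original position, which is the general idea behind a shuffling-then-reconstruction objective.

```python
# Hypothetical sketch of an image-order reconstruction task: shuffle visual
# patch tokens and predict each token's original index. Illustrative only.
import torch
import torch.nn as nn

class ImageOrderHead(nn.Module):
    """Predicts the original position of each shuffled visual token."""
    def __init__(self, hidden_dim: int, num_patches: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_patches)

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (batch, num_patches, hidden_dim)
        return self.classifier(visual_tokens)  # logits over original positions

def shuffle_and_target(visual_tokens: torch.Tensor):
    """Shuffle patch tokens; the permutation serves as the training target."""
    b, n, d = visual_tokens.shape
    perm = torch.stack([torch.randperm(n) for _ in range(b)])  # (b, n)
    shuffled = torch.gather(
        visual_tokens, 1, perm.unsqueeze(-1).expand(-1, -1, d)
    )
    return shuffled, perm

# Toy usage: cross-entropy between predicted and true original positions.
tokens = torch.randn(2, 16, 768)            # dummy batch of visual tokens
shuffled, perm = shuffle_and_target(tokens)
head = ImageOrderHead(hidden_dim=768, num_patches=16)
logits = head(shuffled)                     # (2, 16, 16)
loss = nn.functional.cross_entropy(logits.reshape(-1, 16), perm.reshape(-1))
```

An analogous objective can be formed for the text-order task by shuffling textual tokens or sentences and predicting their original order; the directed-token mechanism and Image-to-Response Guided loss described in the paper are not reproduced here.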
Similar Papers
MMTok: Multimodal Coverage Maximization for Efficient Inference of VLMs
CV and Pattern Recognition
Makes AI understand pictures faster and better.
Some Modalities are More Equal Than Others: Decoding and Architecting Multimodal Integration in MLLMs
CV and Pattern Recognition
Teaches AI to trust the right information.
Direct Visual Grounding by Directing Attention of Visual Tokens
CV and Pattern Recognition
Makes AI better at answering questions about pictures.