Score: 2

Directed-Tokens: A Robust Multi-Modality Alignment Approach to Large Language-Vision Models

Published: August 19, 2025 | arXiv ID: 2508.14264v1

By: Thanh-Dat Truong, Huu-Thien Tran, Tran Thai Son, and more

Potential Business Impact:

Teaches AI to better understand pictures and words together.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large multimodal models (LMMs) have achieved impressive performance thanks to their strong capability across a wide range of understanding tasks. However, these models still suffer from fundamental limitations in robustness and generalization stemming from the alignment and correlation between visual and textual features. In this paper, we introduce a simple yet effective learning mechanism that improves the robust alignment between visual and textual modalities by solving shuffling problems. In particular, the proposed approach improves reasoning capability, visual understanding, and cross-modality alignment by introducing two new tasks into the LMM's pre-training and fine-tuning phases: reconstructing the image order and reconstructing the text order. In addition, we propose a new directed-token approach to capture visual and textual knowledge, enabling the model to reconstruct the correct order of its visual inputs. We then introduce a new Image-to-Response Guided loss to further improve the visual understanding reflected in the LMM's responses. The proposed approach consistently achieves state-of-the-art (SoTA) performance compared with prior LMMs on academic task-oriented and instruction-following LMM benchmarks.
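To make the shuffled-order idea concrete, below is a minimal sketch of the kind of auxiliary objective the abstract describes: visual (or textual) tokens are shuffled, and the model learns to recover their original order. The module name, the linear classification head, and the per-slot cross-entropy formulation are illustrative assumptions for this sketch, not the paper's actual directed-token architecture or loss.

```python
# Hedged sketch of a shuffle-and-reconstruct auxiliary task (assumed design,
# not the authors' implementation). Tokens are permuted, and a small head
# predicts each shuffled token's original position.
import torch
import torch.nn as nn


class OrderReconstructionHead(nn.Module):
    """Toy stand-in for an order-reconstruction task: classifies each
    shuffled token into its original position index."""

    def __init__(self, dim: int, num_tokens: int):
        super().__init__()
        self.classifier = nn.Linear(dim, num_tokens)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) -> logits over original positions
        return self.classifier(tokens)


def shuffle_order_loss(head: OrderReconstructionHead,
                       tokens: torch.Tensor) -> torch.Tensor:
    """Shuffle tokens per sample, then score how well the head recovers
    the original order with a per-slot cross-entropy."""
    b, n, _ = tokens.shape
    perm = torch.stack([torch.randperm(n) for _ in range(b)])  # (b, n)
    shuffled = torch.gather(tokens, 1, perm.unsqueeze(-1).expand_as(tokens))
    logits = head(shuffled)  # (b, n, n): one position distribution per slot
    # The target for shuffled slot i is its original index perm[b, i].
    return nn.functional.cross_entropy(logits.reshape(b * n, n),
                                       perm.reshape(b * n))


# Toy usage: a batch of 4 samples, 16 visual tokens of width 64.
head = OrderReconstructionHead(dim=64, num_tokens=16)
loss = shuffle_order_loss(head, torch.randn(4, 16, 64))
loss.backward()
```

In the paper this objective is added alongside the usual pre-training and fine-tuning losses for both image and text order; the Image-to-Response Guided loss mentioned above is a separate component and is not modeled in this sketch.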

Country of Origin
🇻🇳 🇺🇸 Viet Nam, United States

Page Count
17 pages

Category
Computer Science:
CV and Pattern Recognition