Youtu-VL: Unleashing Visual Potential via Unified Vision-Language Supervision
By: Zhixiang Wei, Yi Li, Zhehan Kan, and more
Potential Business Impact:
Teaches computers to see and understand details better.
Despite the significant advances represented by Vision-Language Models (VLMs), current architectures often fail to retain fine-grained visual information, leading to coarse-grained multimodal comprehension. We attribute this deficiency to a suboptimal training paradigm prevalent in existing VLMs, which exhibits a text-dominant optimization bias by treating visual signals merely as passive conditional inputs rather than as supervisory targets. To mitigate this, we introduce Youtu-VL, a framework built on the Vision-Language Unified Autoregressive Supervision (VLUAS) paradigm, which fundamentally shifts the optimization objective from "vision-as-input" to "vision-as-target." By integrating visual tokens directly into the prediction stream, Youtu-VL applies unified autoregressive supervision to both visual details and linguistic content. We further extend this paradigm to vision-centric tasks, enabling a standard VLM to perform them without task-specific additions. Extensive empirical evaluations demonstrate that Youtu-VL achieves competitive performance on both general multimodal tasks and vision-centric tasks, establishing a robust foundation for the development of comprehensive generalist visual agents.
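To make the "vision-as-target" idea concrete, below is a minimal sketch of what unified autoregressive supervision over a mixed token stream could look like. The paper's actual tokenizer, vocabulary layout, and loss weighting are not specified here, so all names (`unified_ar_loss`, `is_visual`, `visual_weight`) are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of unified autoregressive supervision: both text and
# discrete visual tokens appear as prediction targets in one next-token loss.
import torch
import torch.nn.functional as F

def unified_ar_loss(logits, targets, is_visual, visual_weight=1.0):
    """Next-token cross-entropy over a mixed text/visual token stream.

    logits:    (batch, seq_len, vocab) - predictions over a vocabulary that
               covers both text tokens and discrete visual tokens.
    targets:   (batch, seq_len)        - ground-truth token ids.
    is_visual: (batch, seq_len) bool   - True where the target is a visual token.
    """
    # Shift for next-token prediction: position t predicts token t+1.
    logits = logits[:, :-1].reshape(-1, logits.size(-1))
    targets = targets[:, 1:].reshape(-1)
    is_visual = is_visual[:, 1:].reshape(-1)

    per_token = F.cross_entropy(logits, targets, reduction="none")

    # Unlike the usual "vision-as-input" setup, visual positions are NOT
    # masked out of the loss: both modalities receive supervision.
    weights = torch.where(is_visual,
                          torch.full_like(per_token, visual_weight),
                          torch.ones_like(per_token))
    return (per_token * weights).mean()
```

The key contrast with conventional VLM training is the masking: a text-dominant objective would zero out the loss at visual positions, whereas this sketch keeps them in the prediction stream so the model is penalized for failing to reconstruct fine-grained visual detail.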
Similar Papers
UFVideo: Towards Unified Fine-Grained Video Cooperative Understanding with Large Language Models
CV and Pattern Recognition
Lets computers understand videos at different levels.
UniFusion: Vision-Language Model as Unified Encoder in Image Generation
CV and Pattern Recognition
Makes pictures match words better for editing.
Representation Calibration and Uncertainty Guidance for Class-Incremental Learning based on Vision Language Model
CV and Pattern Recognition
Teaches computers to remember old and new pictures.