VisCodex: Unified Multimodal Code Generation via Merging Vision and Coding Models
By: Lingjie Jiang, Shaohan Huang, Xun Wu, and more
Potential Business Impact:
Helps computers write code from pictures.
Multimodal large language models (MLLMs) have significantly advanced the integration of visual and textual understanding. However, their ability to generate code from multimodal inputs remains limited. In this work, we introduce VisCodex, a unified framework that seamlessly merges vision and coding language models to empower MLLMs with strong multimodal code generation abilities. Leveraging a task vector-based model merging technique, we integrate a state-of-the-art coding LLM into a strong vision-language backbone, while preserving both visual comprehension and advanced coding skills. To support training and evaluation, we introduce the Multimodal Coding Dataset (MCD), a large-scale and diverse collection of 598k samples, including high-quality HTML code, chart image-code pairs, image-augmented StackOverflow QA, and algorithmic problems. Furthermore, we propose InfiBench-V, a novel and challenging benchmark specifically designed to assess models on visually-rich, real-world programming questions that demand a nuanced understanding of both textual and visual contexts. Extensive experiments show that VisCodex achieves state-of-the-art performance among open-source MLLMs and approaches proprietary models like GPT-4o, highlighting the effectiveness of our model merging strategy and new datasets.
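The merging approach described in the abstract is task-vector arithmetic: subtract a shared base model's weights from the coding LLM's weights to get a "coding" task vector, then add a scaled copy of that vector into the language tower of the vision-language backbone. The sketch below is a minimal illustration under assumptions, not the paper's released implementation: the checkpoint file names, the scaling coefficient alpha, and the premise that both models share the same base architecture (so parameter names and shapes line up) are all placeholders.

```python
import torch

def task_vector(finetuned_state, base_state):
    """Task vector = fine-tuned weights minus the shared base weights."""
    return {k: finetuned_state[k] - base_state[k]
            for k in finetuned_state if k in base_state}

def merge(backbone_state, tau, alpha=0.5):
    """Add the scaled coding task vector into matching backbone parameters."""
    merged = dict(backbone_state)
    for k, delta in tau.items():
        if k in merged and merged[k].shape == delta.shape:
            merged[k] = merged[k] + alpha * delta
    return merged

# Hypothetical checkpoints (placeholder paths, for illustration only):
# the base LLM, a coding LLM fine-tuned from it, and the language tower
# of a vision-language model built on the same base.
base = torch.load("base_llm.pt")
coder = torch.load("coding_llm.pt")
vlm_lm = torch.load("vlm_language_tower.pt")

tau = task_vector(coder, base)
merged = merge(vlm_lm, tau, alpha=0.5)
torch.save(merged, "merged_language_tower.pt")
```

Because only existing weights are combined, this kind of merge needs no multimodal training pass of its own; the scaling coefficient trades off how much coding ability is injected against how much of the backbone's visual grounding is preserved.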
Similar Papers
VisCoder2: Building Multi-Language Visualization Coding Agents
Software Engineering
Helps computers make better charts and graphs.
MathCoder-VL: Bridging Vision and Code for Enhanced Multimodal Mathematical Reasoning
CV and Pattern Recognition
Teaches computers to solve math problems with pictures.
Multilingual Multimodal Software Developer for Code Generation
Computation and Language
Helps computers write code from pictures.