Chart-CoCa: Self-Improving Chart Understanding of Vision LMs via Code-Driven Synthesis and Candidate-Conditioned Answering
By: Gongyao Jiang, Qiong Luo
Potential Business Impact:
Teaches computers to read charts better.
Vision Language Models (VLMs) often struggle with chart understanding tasks, particularly with accurate chart description and complex reasoning. Synthetic data generation is a promising solution, but it typically faces the challenge of noisy labels. To address this challenge, we first introduce a chart synthesis pipeline that generates aligned chart-question-answer triplets through code generation and execution, ensuring the reliability of synthetic data without human intervention. Furthermore, inspired by test-time scaling, which increases the inference budget and thereby improves performance, we design a candidate-conditioned answering process. The VLM first generates multiple responses per query and then synthesizes the final answer by contextualizing these candidates. Experiments demonstrate significant improvements, with up to a 15.50-point accuracy gain over the initial VLM, in a fully self-improving paradigm without any human-labeled data or external models.
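To make the two components concrete, here is a minimal Python sketch of the ideas the abstract describes. The `vlm` callable, the sampling temperatures, and the specific chart/question template are all illustrative assumptions, not the authors' released code or API; the point is only that code-driven synthesis yields answers that are correct by construction, and that candidate-conditioned answering reuses the same model to aggregate its own samples.

```python
# Illustrative sketch only; `vlm` is a hypothetical stand-in for the
# underlying vision-language model, NOT the paper's actual interface.
import random
import matplotlib.pyplot as plt


def synthesize_triplet():
    """Code-driven synthesis: render a chart from generated code, so the
    ground-truth answer is known by construction (no noisy labels)."""
    labels = ["Q1", "Q2", "Q3", "Q4"]
    values = [random.randint(1, 100) for _ in labels]
    fig, ax = plt.subplots()
    ax.bar(labels, values)
    question = "Which quarter has the highest value?"
    answer = labels[values.index(max(values))]  # exact by construction
    return fig, question, answer


def candidate_conditioned_answer(vlm, chart, question, n=5):
    """Test-time scaling: sample several candidate answers, then ask the
    same VLM to synthesize a final answer conditioned on all of them."""
    candidates = [vlm(chart, question, temperature=0.8) for _ in range(n)]
    prompt = (
        f"Question: {question}\n"
        "Candidate answers:\n"
        + "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
        + "\nConsidering these candidates, give the final answer."
    )
    return vlm(chart, prompt, temperature=0.0)
```

In the self-improving paradigm the abstract describes, triplets of this kind would be used to fine-tune the VLM itself, so neither human annotation nor an external teacher model is required.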
Similar Papers
Effective Training Data Synthesis for Improving MLLM Chart Understanding
CV and Pattern Recognition
Helps computers understand science graphs better.
Visual Programmability: A Guide for Code-as-Thought in Chart Understanding
CV and Pattern Recognition
Computers learn to read charts by choosing how to think.
Synthesizing Visual Concepts as Vision-Language Programs
Artificial Intelligence
Makes AI understand pictures and think logically.