VisCoder2: Building Multi-Language Visualization Coding Agents
By: Yuansheng Ni, Songcheng Cai, Xiangchao Chen, and more
Potential Business Impact:
Helps computers make better charts and graphs.
Large language models (LLMs) have recently enabled coding agents capable of generating, executing, and revising visualization code. However, existing models often fail in practical workflows due to limited language coverage, unreliable execution, and the lack of iterative correction mechanisms. Progress has been constrained by narrow datasets and benchmarks that emphasize single-round generation and single-language tasks. To address these challenges, we introduce three complementary resources for advancing visualization coding agents. VisCode-Multi-679K is a large-scale supervised dataset containing 679K validated and executable visualization samples, with multi-turn correction dialogues, across 12 programming languages. VisPlotBench is a benchmark for systematic evaluation, featuring executable tasks, rendered outputs, and protocols for both initial generation and multi-round self-debug. Finally, we present VisCoder2, a family of multi-language visualization models trained on VisCode-Multi-679K. Experiments show that VisCoder2 significantly outperforms strong open-source baselines and approaches the performance of proprietary models such as GPT-4.1, with further gains from iterative self-debug, particularly in symbolic or compiler-dependent languages, reaching an 82.4% overall execution pass rate at the 32B scale.
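The multi-round self-debug protocol the abstract describes amounts to a simple loop: generate code for a task, execute it, and if execution fails, feed the error message back to the model as a correction turn. The Python sketch below illustrates that loop under stated assumptions: the model.generate(prompt) interface, the repair-prompt wording, and the round limit are hypothetical placeholders, not the paper's actual implementation, and the sketch checks only whether the code executes, not the rendered output that VisPlotBench also evaluates.

import subprocess
import sys
import tempfile
from pathlib import Path

def run_snippet(code: str, timeout: int = 30) -> tuple[bool, str]:
    # Execute a code snippet in an isolated subprocess; return (passed, stderr).
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode == 0, proc.stderr
    except subprocess.TimeoutExpired:
        return False, "timeout"
    finally:
        Path(path).unlink(missing_ok=True)

def self_debug(model, task_prompt: str, max_rounds: int = 3) -> tuple[str, bool]:
    # Initial generation, then up to max_rounds execution-feedback repair turns.
    # model.generate is a hypothetical interface assumed for this sketch.
    code = model.generate(task_prompt)
    for _ in range(max_rounds):
        passed, err = run_snippet(code)
        if passed:
            return code, True
        # Feed the traceback back to the model as a correction turn.
        repair_prompt = (
            f"{task_prompt}\n\nYour previous code failed with:\n{err}\n"
            "Please return a corrected, runnable version."
        )
        code = model.generate(repair_prompt)
    passed, _ = run_snippet(code)
    return code, passed

Under this framing, execution pass rate is simply the fraction of benchmark tasks whose final code runs without error, which is the sense in which the 82.4% figure at the 32B scale should be read.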
Similar Papers
VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation
Software Engineering
Helps computers draw accurate charts from words.
VisCodex: Unified Multimodal Code Generation via Merging Vision and Coding Models
Computation and Language
Helps computers write code from pictures.
MathCoder-VL: Bridging Vision and Code for Enhanced Multimodal Mathematical Reasoning
Computer Vision and Pattern Recognition
Teaches computers to solve math problems with pictures.