Transductive Visual Programming: Evolving Tool Libraries from Experience for Spatial Reasoning
By: Shengguang Wu, Xiaohan Wang, Yuhui Zhang, and more
Potential Business Impact:
Learns to solve 3D puzzles by building its own tools.
Spatial reasoning in 3D scenes requires precise geometric calculations that challenge vision-language models. Visual programming addresses this by decomposing problems into steps that call specialized tools, yet existing methods rely on either fixed toolsets or speculative tool induction before solving problems, resulting in suboptimal programs and poor utilization of the induced tools. We present Transductive Visual Programming (TVP), a novel framework that builds new tools from its own experience rather than from speculation. TVP first solves problems using basic tools while accumulating experiential solutions into an Example Library, then abstracts recurring patterns from these programs into reusable higher-level tools for an evolving Tool Library. This allows TVP to tackle new problems with increasingly powerful tools learned from experience. On Omni3D-Bench, TVP achieves state-of-the-art performance, outperforming GPT-4o by 22% and the previous best visual programming system by 11%. Our transductively learned tools are used 5x more frequently as core program dependencies than inductively created ones, demonstrating more effective tool discovery and reuse. The evolved tools also generalize strongly to unseen spatial tasks, achieving superior performance on benchmarks from the SpatialScore-Hard collection without any test-set-specific modification. Our work establishes experience-driven transductive tool creation as a powerful paradigm for building self-evolving visual programming agents that effectively tackle challenging spatial reasoning tasks. We release our code at https://transductive-visualprogram.github.io/.
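To make the described loop concrete, below is a minimal conceptual sketch in Python of the solve-store-abstract cycle from the abstract. All names here (ExampleLibrary, ToolLibrary, solve_with_tools, abstract_recurring_patterns) are illustrative assumptions, not the paper's actual API; the LLM-driven steps are stubbed out.

```python
# Minimal sketch of the TVP-style loop described in the abstract.
# Assumptions: class/function names are hypothetical; the LLM calls are stubs.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ExampleLibrary:
    """Accumulates solved (problem, program) pairs as experience."""
    solutions: List[Dict] = field(default_factory=list)

    def add(self, problem: str, program: str) -> None:
        self.solutions.append({"problem": problem, "program": program})


@dataclass
class ToolLibrary:
    """Evolving toolset: starts with basic primitives, grows with
    higher-level tools abstracted from accumulated programs."""
    tools: Dict[str, Callable] = field(default_factory=dict)

    def register(self, name: str, fn: Callable) -> None:
        self.tools[name] = fn


def solve_with_tools(problem: str, tool_lib: ToolLibrary) -> str:
    """Stub: in TVP an LLM would write a program that calls tools in tool_lib."""
    return f"# program solving: {problem}\n"


def abstract_recurring_patterns(example_lib: ExampleLibrary) -> Dict[str, Callable]:
    """Stub: an LLM would mine recurring code patterns across stored
    programs and wrap them as new reusable higher-level tools."""
    return {}


def tvp_loop(problems: List[str],
             tool_lib: ToolLibrary,
             example_lib: ExampleLibrary) -> None:
    for problem in problems:
        # 1) Solve with the current (evolving) tool library.
        program = solve_with_tools(problem, tool_lib)
        # 2) Store the experiential solution.
        example_lib.add(problem, program)
        # 3) Transduction: abstract recurring patterns into new tools,
        #    making them available for subsequent problems.
        for name, fn in abstract_recurring_patterns(example_lib).items():
            tool_lib.register(name, fn)
```

The key design choice this sketch highlights is that tool creation happens after solving, from observed programs, rather than speculatively before any problem is attempted.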
Similar Papers
Enhancing Visual Programming for Visual Reasoning via Probabilistic Graphs
CV and Pattern Recognition
Helps computers solve visual puzzles better.
Video4Spatial: Towards Visuospatial Intelligence with Context-Guided Video Generation
CV and Pattern Recognition
Teaches computers to understand space from videos.
Generate, Transduct, Adapt: Iterative Transduction with VLMs
CV and Pattern Recognition
Teaches computers to understand new things without examples.