Scaling Unverifiable Rewards: A Case Study on Visual Insights
By: Shuyu Gan, James Mooney, Pan Hao, and more
Potential Business Impact:
Makes AI better at complex, multi-step jobs.
Large Language Model (LLM) agents can increasingly automate complex reasoning through Test-Time Scaling (TTS), iterative refinement guided by reward signals. However, many real-world tasks involve multi-stage pipelines whose final outcomes lack verifiable rewards or sufficient data to train robust reward models, making judge-based refinement prone to accumulating error across stages. We propose Selective TTS, a process-based refinement framework that scales inference across the different stages of a multi-agent pipeline, rather than repeatedly refining a single output over time as in prior work. By distributing compute across stages and pruning low-quality branches early using process-specific judges, Selective TTS mitigates judge drift and stabilizes refinement. Grounded in the data science pipeline, we build an end-to-end multi-agent pipeline that generates visually insightful charts and reports for a given dataset, and we design a reliable LLM-based judge model aligned with human experts (Kendall's τ = 0.55). Selective TTS then improves insight quality under a fixed compute budget, increasing mean scores from 61.64 to 65.86 while reducing variance. We hope our findings serve as a first step toward scaling complex, open-ended tasks with unverifiable rewards, such as scientific discovery and story generation.
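To make the core idea concrete, here is a minimal sketch of stage-wise expansion and pruning as described in the abstract. All names (selective_tts, branch_factor, keep_top, the stage and judge callables) are hypothetical illustrations, not the paper's actual interfaces.

```python
from typing import Any, Callable, List

def selective_tts(
    data: Any,
    stages: List[Callable[[Any], Any]],    # one candidate generator per pipeline stage
    judges: List[Callable[[Any], float]],  # process-specific judge for each stage
    branch_factor: int = 4,                # candidates sampled per surviving branch
    keep_top: int = 2,                     # branches kept after pruning each stage
) -> Any:
    """Distribute test-time compute across pipeline stages and prune
    low-quality branches early, instead of refining one output repeatedly."""
    branches = [data]
    for generate, judge in zip(stages, judges):
        # Expand: sample several candidates from every surviving branch.
        candidates = [generate(b) for b in branches for _ in range(branch_factor)]
        # Prune: keep only the candidates the stage-specific judge scores highest,
        # so a weak intermediate output does not propagate downstream.
        candidates.sort(key=judge, reverse=True)
        branches = candidates[:keep_top]
    # Return the best final-stage output under the last judge.
    return max(branches, key=judges[-1])
```

Under a fixed compute budget, the trade-off is between branch_factor (exploration within a stage) and keep_top (how aggressively low-quality branches are cut before later stages).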
Similar Papers
Limits and Gains of Test-Time Scaling in Vision-Language Reasoning
Machine Learning (CS)
Makes AI better at understanding pictures and words.
Step-level Verifier-guided Hybrid Test-Time Scaling for Large Language Models
Computation and Language
Makes AI think better without extra training.
AgentTTS: Large Language Model Agent for Test-time Compute-optimal Scaling Strategy in Complex Tasks
Artificial Intelligence
Boosts AI for complex, multi-step tasks.