VISTA: A Test-Time Self-Improving Video Generation Agent
By: Do Xuan Long, Xingchen Wan, Hootan Nakhost and more
Potential Business Impact:
Makes videos better by automatically refining the prompts.
Despite rapid advances in text-to-video synthesis, generated video quality remains critically dependent on precise user prompts. Existing test-time optimization methods, successful in other domains, struggle with the multi-faceted nature of video. In this work, we introduce VISTA (Video Iterative Self-improvemenT Agent), a novel multi-agent system that autonomously improves video generation by iteratively refining prompts. VISTA first decomposes a user idea into a structured temporal plan. After each generation round, the best video is identified through a robust pairwise tournament. This winning video is then critiqued by a trio of specialized agents focusing on visual, audio, and contextual fidelity. Finally, a reasoning agent synthesizes this feedback to introspectively rewrite and enhance the prompt for the next generation cycle. Experiments on single- and multi-scene video generation scenarios show that while prior methods yield inconsistent gains, VISTA consistently improves video quality and alignment with user intent, achieving a pairwise win rate of up to 60% against state-of-the-art baselines. Human evaluators concur, preferring VISTA outputs in 66.4% of comparisons.
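The abstract describes a concrete agent loop: plan, generate, tournament-select, critique, rewrite. The sketch below shows one minimal way such a loop could be wired together, assuming each agent is supplied as a plain callable; every function name and parameter here is a hypothetical illustration, not the paper's actual interface.

```python
from typing import Any, Callable, List, Tuple

def tournament_select(candidates: List[Any], compare: Callable[[Any, Any], Any]) -> Any:
    """Sequential winner-stays pairwise tournament.

    compare(a, b) is assumed to return whichever candidate it prefers.
    """
    winner = candidates[0]
    for challenger in candidates[1:]:
        winner = compare(winner, challenger)
    return winner

def vista_style_loop(
    user_idea: str,
    plan: Callable[[str], str],                # idea -> structured temporal plan / prompt
    generate: Callable[[str], Any],            # prompt -> one candidate video
    compare: Callable[[Any, Any], Any],        # pairwise judge, returns the preferred video
    critics: List[Callable[[Any], str]],       # e.g. visual, audio, contextual critics
    rewrite: Callable[[str, List[str]], str],  # (prompt, critiques) -> improved prompt
    iterations: int = 3,
    candidates_per_round: int = 4,
) -> Tuple[Any, str]:
    # 1. Decompose the user idea into a structured temporal plan (the initial prompt).
    prompt = plan(user_idea)
    best = None
    for _ in range(iterations):
        # 2. Sample several candidate videos from the current prompt.
        candidates = [generate(prompt) for _ in range(candidates_per_round)]
        # 3. Identify the round winner via a pairwise tournament.
        best = tournament_select(candidates, compare)
        # 4. Critique the winner along visual, audio, and contextual axes.
        critiques = [critic(best) for critic in critics]
        # 5. A reasoning step rewrites the prompt from the synthesized feedback.
        prompt = rewrite(prompt, critiques)
    return best, prompt
```

The sequential winner-stays tournament is just one simple realization of pairwise selection; the paper's tournament procedure may differ in structure and robustness.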
Similar Papers
VISTA: A Vision and Intent-Aware Social Attention Framework for Multi-Agent Trajectory Prediction
CV and Pattern Recognition
Helps self-driving cars avoid crashing into each other.
Structured Prompting and Multi-Agent Knowledge Distillation for Traffic Video Interpretation and Risk Inference
CV and Pattern Recognition
Helps cars understand roads and dangers better.
VISTA: Generative Visual Imagination for Vision-and-Language Navigation
Robotics
Helps robots find things using imagination.