CoF-T2I: Video Models as Pure Visual Reasoners for Text-to-Image Generation
By: Chengzhuo Tong, Mingkun Chang, Shenglong Zhang, and more
Recent video generation models have revealed the emergence of Chain-of-Frame (CoF) reasoning, which enables frame-by-frame visual inference. With this capability, video models have been successfully applied to various visual tasks (e.g., maze solving, visual puzzles). However, their potential to enhance text-to-image (T2I) generation remains largely unexplored, because the T2I generation process lacks a clearly defined visual reasoning starting point and interpretable intermediate states. To bridge this gap, we propose CoF-T2I, a model that integrates CoF reasoning into T2I generation via progressive visual refinement: intermediate frames act as explicit reasoning steps, and the final frame serves as the output image. To establish this explicit generation process, we curate CoF-Evol-Instruct, a dataset of CoF trajectories that model the progression from semantics to aesthetics. To further improve quality and avoid motion artifacts, we encode each frame independently. Experiments show that CoF-T2I significantly outperforms its base video model and achieves competitive performance on challenging benchmarks, reaching 0.86 on GenEval and 7.468 on Imagine-Bench. These results highlight the substantial promise of video models for advancing high-quality text-to-image generation.
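To make the described pipeline concrete, here is a minimal sketch of the inference flow under stated assumptions: a video backbone produces a trajectory of per-frame latents, each frame is decoded independently (rather than through a temporally coupled encoder), and the last frame is returned as the generated image. All names here (`VideoBackbone`, `FrameVAE`, `generate_latents`, `decode`) are hypothetical placeholders for illustration, not the paper's actual API.

```python
# Minimal sketch of CoF-T2I inference, assuming a hypothetical video
# backbone and a per-frame decoder. Class and method names are
# illustrative stand-ins, not the paper's real interfaces.

from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """Placeholder for a decoded image; a real system would hold pixel data."""
    step: int
    description: str


class VideoBackbone:
    """Stand-in for a CoF-capable video model (hypothetical interface)."""

    def generate_latents(self, prompt: str, num_frames: int) -> List[int]:
        # Each latent represents one explicit reasoning step, moving from
        # coarse semantics toward final aesthetics; plain indices stand in
        # for latent tensors here.
        return list(range(num_frames))


class FrameVAE:
    """Per-frame decoder; frames are handled independently, mirroring the
    independent per-frame encoding that avoids motion artifacts."""

    def decode(self, latent: int, prompt: str) -> Frame:
        return Frame(step=latent, description=f"refinement step {latent} for {prompt!r}")


def cof_t2i(prompt: str, num_frames: int = 8) -> Frame:
    """Run a CoF trajectory and return the final frame as the T2I output."""
    backbone, vae = VideoBackbone(), FrameVAE()
    latents = backbone.generate_latents(prompt, num_frames)
    trajectory = [vae.decode(z, prompt) for z in latents]  # intermediate reasoning states
    return trajectory[-1]  # the final frame is taken as the generated image


if __name__ == "__main__":
    print(cof_t2i("a red cube on a blue sphere"))
```

Decoding each frame independently in this sketch mirrors the paper's independent per-frame encoding; an actual implementation would operate on latent tensors and diffusion steps rather than the toy indices used above.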