TTOM: Test-Time Optimization and Memorization for Compositional Video Generation
By: Leigang Qu, Ziyang Wang, Na Zheng, and more
Potential Business Impact:
Makes AI videos follow instructions better.
Video Foundation Models (VFMs) exhibit remarkable visual generation performance but struggle in compositional scenarios (e.g., motion, numeracy, and spatial relations). In this work, we introduce Test-Time Optimization and Memorization (TTOM), a training-free framework that aligns VFM outputs with spatiotemporal layouts during inference for better text-image alignment. Rather than directly intervening in latents or attention on a per-sample basis as in existing work, we integrate and optimize new parameters guided by a general layout-attention objective. Furthermore, we formulate video generation within a streaming setting and maintain historical optimization contexts with a parametric memory mechanism that supports flexible operations such as insert, read, update, and delete. Notably, we find that TTOM disentangles compositional world knowledge, showing strong transferability and generalization. Experimental results on the T2V-CompBench and VBench benchmarks establish TTOM as an effective, practical, scalable, and efficient framework for achieving cross-modal alignment in compositional video generation on the fly.
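To make the two ideas in the abstract concrete, below is a minimal sketch, not the authors' implementation, of (1) test-time optimization of small added parameters under a layout-attention objective and (2) a parametric memory supporting insert, read, update, and delete. All names here (ParametricMemory, layout_attention_loss, attn_bias, the toy sizes) are hypothetical illustrations, assuming an additive bias on cross-attention logits stands in for the "new parameters" the paper optimizes.

```python
# Hedged sketch of TTOM's two components as described in the abstract.
# Assumptions: attention is a (tokens x spatial) map from a frozen VFM,
# the "new parameters" are a learnable bias on its logits, and the layout
# is a binary spatial mask. None of these names come from the paper's code.
import torch


class ParametricMemory:
    """Hypothetical key -> parameter store for historical optimization contexts."""
    def __init__(self):
        self._store = {}

    def insert(self, key, params):            # add a new optimization context
        self._store[key] = params.detach().clone()

    def read(self, key):                       # fetch parameters to warm-start from
        return self._store.get(key)

    def update(self, key, params):             # overwrite after further optimization
        self._store[key] = params.detach().clone()

    def delete(self, key):                     # drop a stale context
        self._store.pop(key, None)


def layout_attention_loss(attn, layout_mask):
    """Encourage a phrase's attention mass to fall inside its layout box.

    attn:        (tokens, H*W) cross-attention map, rows sum to 1.
    layout_mask: (H*W,) binary mask, 1 inside the box, 0 outside.
    """
    inside = (attn * layout_mask).sum(dim=-1)   # attention mass inside the box
    return (1.0 - inside).mean()                # push mass toward the layout


def test_time_optimize(attn_logits, layout_mask, init_bias=None, steps=20, lr=1e-1):
    """Optimize a small additive bias on attention logits at inference time."""
    bias = (init_bias.clone() if init_bias is not None
            else torch.zeros_like(attn_logits)).requires_grad_(True)
    opt = torch.optim.Adam([bias], lr=lr)
    for _ in range(steps):
        attn = torch.softmax(attn_logits + bias, dim=-1)
        loss = layout_attention_loss(attn, layout_mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return bias.detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    tokens, hw = 4, 64                          # toy sizes: 4 phrase tokens, 8x8 frame
    attn_logits = torch.randn(tokens, hw)       # stand-in for frozen-VFM attention logits
    layout_mask = torch.zeros(hw)
    layout_mask[:16] = 1.0                      # toy "box" covering the first 16 cells

    memory = ParametricMemory()
    warm = memory.read("a red ball")            # reuse a past context if one exists
    bias = test_time_optimize(attn_logits, layout_mask, init_bias=warm)
    memory.insert("a red ball", bias)           # remember the context for later frames
```

In the streaming setting the abstract describes, the memory read would warm-start optimization for each new chunk of frames, and update/delete would keep the stored contexts fresh as the prompt or layout changes; the sketch only shows the single-step shape of that loop.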
Similar Papers
StreamingTOM: Streaming Token Compression for Efficient Video Understanding
CV and Pattern Recognition
Makes computers understand videos faster and cheaper.
ViT$^3$: Unlocking Test-Time Training in Vision
CV and Pattern Recognition
Makes computers understand pictures faster and better.
Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising
CV and Pattern Recognition
Makes videos move exactly how you want.