Score: 1

VALA: Learning Latent Anchors for Training-Free and Temporally Consistent Video Editing

Published: October 27, 2025 | arXiv ID: 2510.22970v1

By: Zhangkai Wu, Xuhui Fan, Zhongyuan Xie, and more

Potential Business Impact:

Enables faster, higher-quality video editing.

Business Areas:
Image Recognition Data and Analytics, Software

Recent advances in training-free video editing have enabled lightweight and precise cross-frame generation by leveraging pre-trained text-to-image diffusion models. However, existing methods often rely on heuristic frame selection to maintain temporal consistency during DDIM inversion, which introduces manual bias and reduces the scalability of end-to-end inference. In this paper, we propose VALA (Variational Alignment for Latent Anchors), a variational alignment module that adaptively selects key frames and compresses their latent features into semantic anchors for consistent video editing. To learn meaningful assignments, VALA introduces a variational framework with a contrastive learning objective, allowing it to transform cross-frame latent representations into compressed latent anchors that preserve both content and temporal coherence. Our method can be fully integrated into training-free, text-to-image-based video editing models. Extensive experiments on real-world video editing benchmarks show that VALA achieves state-of-the-art performance in inversion fidelity, editing quality, and temporal consistency, while offering improved efficiency over prior methods.
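The core mechanism the abstract describes, softly assigning per-frame latents to a small set of learned anchors and shaping that assignment with a contrastive objective, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation, not the authors' code: the module name LatentAnchorModule, the pooled (T, D) latent shape, the plain-softmax assignment, and the InfoNCE-style loss are all assumptions; the paper's actual variational formulation and its integration with DDIM inversion are not reproduced here.

```python
# Minimal sketch of a VALA-style latent-anchor module (PyTorch).
# All names, shapes, and design choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentAnchorModule(nn.Module):
    def __init__(self, latent_dim: int, num_anchors: int = 4, tau: float = 0.5):
        super().__init__()
        # Learnable anchor queries that frames are softly assigned to.
        self.anchor_queries = nn.Parameter(torch.randn(num_anchors, latent_dim))
        self.tau = tau  # temperature for the frame-to-anchor assignment

    def forward(self, frame_latents: torch.Tensor):
        # frame_latents: (T, D) -- one pooled latent per frame.
        # Soft assignment of each frame to each anchor: (T, K).
        logits = frame_latents @ self.anchor_queries.t() / self.tau
        assign = F.softmax(logits, dim=-1)
        # Compress frames into anchors by assignment-weighted averaging: (K, D).
        weights = assign / (assign.sum(dim=0, keepdim=True) + 1e-8)
        anchors = weights.t() @ frame_latents
        return anchors, assign


def contrastive_anchor_loss(frame_latents, anchors, assign, tau=0.1):
    # InfoNCE-style objective: pull each frame toward its highest-weight
    # anchor and push it away from the others. Using argmax of the soft
    # assignment as the positive target is an illustrative simplification.
    f = F.normalize(frame_latents, dim=-1)  # (T, D)
    a = F.normalize(anchors, dim=-1)        # (K, D)
    sim = f @ a.t() / tau                   # (T, K)
    targets = assign.argmax(dim=-1)         # (T,)
    return F.cross_entropy(sim, targets)


# Usage with dummy data: 16 frames, 64-dim pooled latents, 4 anchors.
if __name__ == "__main__":
    latents = torch.randn(16, 64)
    module = LatentAnchorModule(latent_dim=64, num_anchors=4)
    anchors, assign = module(latents)
    loss = contrastive_anchor_loss(latents, anchors, assign)
    print(anchors.shape, loss.item())
```

One design note on the sketch: the column-normalized weighting makes each anchor a convex combination of the frames assigned to it, which is one simple way to realize "compressing cross-frame latent features into semantic anchors" while keeping anchors in the same latent space as the frames.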

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition