A Study of Finetuning Video Transformers for Multi-view Geometry Tasks
By: Huimin Wu, Kwang-Ting Cheng, Stephen Lin, et al.
This paper presents an investigation of vision transformer learning for multi-view geometry tasks, such as optical flow estimation, by fine-tuning video foundation models. Unlike previous methods that rely on custom architectural designs and task-specific pretraining, our research finds that general-purpose models pretrained on videos can be readily transferred to multi-view problems with minimal adaptation. The core insight is that general-purpose attention between patches captures the temporal and spatial information needed for geometric reasoning. We demonstrate that appending a linear decoder to the Transformer backbone produces satisfactory results, and that iterative refinement can further elevate performance to state-of-the-art levels. This conceptually simple approach achieves top cross-dataset generalization results for optical flow estimation, with end-point error (EPE) of 0.69, 1.78, and 3.15 on the Sintel clean, Sintel final, and KITTI datasets, respectively. Our method additionally sets a new record on the online test benchmarks, with EPE values of 0.79 and 1.88 on Sintel clean and final, and an F1 value of 3.79 on KITTI. Applications to depth estimation and stereo matching also show strong performance, illustrating the versatility of video-pretrained models for geometric vision tasks.
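To make the decoding step concrete, below is a minimal sketch of the "linear decoder on top of a video transformer backbone" idea described in the abstract. It is not the authors' released code: the backbone features are a stand-in for what any video-pretrained ViT (e.g., a VideoMAE-style encoder) would produce, and all names, shapes, and the patch size are illustrative assumptions. The decoder is a single linear layer that maps each patch token to a small flow tile, which is then unfolded into a dense two-channel flow field.

```python
import torch
import torch.nn as nn

class LinearFlowDecoder(nn.Module):
    """Minimal linear decoding head: each patch token is projected to a
    (patch_size x patch_size x 2) tile of the optical flow field, and the
    tiles are reassembled into a dense (B, 2, H, W) flow map."""

    def __init__(self, embed_dim: int = 768, patch_size: int = 16):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(embed_dim, patch_size * patch_size * 2)

    def forward(self, tokens: torch.Tensor, grid_hw: tuple) -> torch.Tensor:
        # tokens: (B, N, C) patch features from the video backbone,
        # where N = grid_h * grid_w tokens cover the reference frame.
        B, N, C = tokens.shape
        h, w = grid_hw
        p = self.patch_size
        flow = self.proj(tokens)                # (B, N, p*p*2)
        flow = flow.view(B, h, w, p, p, 2)      # one flow tile per token
        flow = flow.permute(0, 5, 1, 3, 2, 4)   # (B, 2, h, p, w, p)
        return flow.reshape(B, 2, h * p, w * p) # dense (B, 2, H, W) flow


# Illustrative usage; `backbone_tokens` stands in for the per-patch features
# a video-pretrained transformer would produce for a two-frame input.
B, H, W, p, dim = 2, 224, 224, 16, 768
backbone_tokens = torch.randn(B, (H // p) * (W // p), dim)
decoder = LinearFlowDecoder(embed_dim=dim, patch_size=p)
flow = decoder(backbone_tokens, grid_hw=(H // p, W // p))
print(flow.shape)  # torch.Size([2, 2, 224, 224])
```

The iterative refinement mentioned in the abstract would be layered on top of this head, for instance by repeatedly predicting a residual flow and accumulating it into the estimate; the sketch above covers only the single-pass linear decoding step.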