Repurposing Video Diffusion Transformers for Robust Point Tracking
By: Soowon Son, Honggyu An, Chaehyun Kim, and more
Point tracking aims to localize corresponding points across video frames, serving as a fundamental task for 4D reconstruction, robotics, and video editing. Existing methods commonly rely on shallow convolutional backbones such as ResNet that process frames independently, lacking temporal coherence and producing unreliable matching costs under challenging conditions. Through systematic analysis, we find that video Diffusion Transformers (DiTs), pre-trained on large-scale real-world videos with spatio-temporal attention, inherently exhibit strong point tracking capability and robustly handle dynamic motions and frequent occlusions. We propose DiTracker, which adapts video DiTs through: (1) query-key attention matching, (2) lightweight LoRA tuning, and (3) cost fusion with a ResNet backbone. Despite training with an 8× smaller batch size, DiTracker achieves state-of-the-art performance on the challenging ITTO benchmark and matches or outperforms state-of-the-art models on the TAP-Vid benchmarks. Our work validates video DiT features as an effective and efficient foundation for point tracking.
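To give a rough sense of what query-key attention matching and cost fusion could look like in practice, here is a minimal PyTorch sketch. The feature shapes, function names, temperature value, and the simple weighted-sum fusion are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn.functional as F

def qk_matching_cost(q_feats, k_feats, query_xy, temperature=0.07):
    """Matching cost map between a query point in a reference frame and all
    locations in a target frame, built from query/key projections of a DiT
    attention layer (hypothetical shapes: (H, W, C))."""
    x, y = query_xy
    q = q_feats[y, x]                                # (C,) feature at the tracked point
    k = k_feats.reshape(-1, k_feats.shape[-1])       # (H*W, C) target-frame features
    # Cosine similarity, softmax-normalized over target locations
    sim = F.normalize(k, dim=-1) @ F.normalize(q, dim=0)
    cost = F.softmax(sim / temperature, dim=0)
    return cost.reshape(k_feats.shape[0], k_feats.shape[1])   # (H, W) cost map

def fuse_costs(dit_cost, resnet_cost, alpha=0.5):
    """Blend DiT-derived and ResNet-derived cost maps (a simple weighted sum;
    the actual fusion in DiTracker may be learned)."""
    return alpha * dit_cost + (1.0 - alpha) * resnet_cost
```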