MUT3R: Motion-aware Updating Transformer for Dynamic 3D Reconstruction
By: Guole Shen, Tianchen Deng, Xingrui Qin, and more
Potential Business Impact:
Fixes wobbly 3D pictures of scenes with moving objects.
Recent stateful recurrent neural networks have achieved remarkable progress on static 3D reconstruction but remain vulnerable to motion-induced artifacts, where non-rigid regions corrupt attention propagation between the spatial memory and image features. By analyzing the internal behavior of the state and image-token updating mechanism, we find that aggregating self-attention maps across layers reveals a consistent pattern: dynamic regions are naturally down-weighted, exposing an implicit motion cue that the pretrained transformer already encodes but never explicitly uses. Motivated by this observation, we introduce MUT3R, a training-free framework that applies this attention-derived motion cue to suppress dynamic content in the early transformer layers during inference. Our attention-level gating module suppresses the influence of dynamic regions before their artifacts propagate through the feature hierarchy. Notably, we do not retrain or fine-tune the model; we let the pretrained transformer diagnose its own motion cues and correct itself. This early regulation stabilizes geometric reasoning in streaming scenarios and improves temporal consistency and camera-pose robustness across multiple dynamic benchmarks, offering a simple, training-free pathway toward motion-aware streaming reconstruction.
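To make the mechanism concrete, here is a minimal PyTorch sketch of the idea as the abstract describes it: average self-attention maps captured from early layers (e.g., via forward hooks) into a per-token motion score, then use that score to down-weight image tokens before they reach later layers. The function names, the min-max normalization, and the linear gating below are illustrative assumptions, not the authors' actual implementation.

```python
import torch

def motion_cue_from_attention(attn_maps):
    """Aggregate self-attention maps across layers into a per-token motion score.

    attn_maps: list of tensors of shape [B, heads, N, N] captured from the
    transformer's early layers (hypothetically, via forward hooks).
    Per the paper's observation, tokens that receive little attention are
    treated as likely dynamic.
    """
    # Mean attention each token *receives* (average over heads, then queries).
    received = torch.stack([a.mean(dim=1).mean(dim=1) for a in attn_maps])  # [L, B, N]
    received = received.mean(dim=0)                                         # [B, N]
    # Min-max normalize per sample; low received attention -> high motion score.
    r_min = received.amin(dim=-1, keepdim=True)
    r_max = received.amax(dim=-1, keepdim=True)
    motion = 1.0 - (received - r_min) / (r_max - r_min + 1e-6)
    return motion                                                           # [B, N]

def gate_tokens(tokens, motion, strength=1.0):
    """Attention-level gating (assumed form): down-weight image tokens
    flagged as dynamic before their features propagate further.

    tokens: [B, N, C]; motion: [B, N]; strength scales the suppression.
    """
    gate = (1.0 - strength * motion.unsqueeze(-1)).clamp(min=0.0)  # [B, N, 1]
    return tokens * gate

# Toy usage with random tensors (B=1, heads=4, N=16 tokens, C=32 channels).
attn = [torch.softmax(torch.randn(1, 4, 16, 16), dim=-1) for _ in range(3)]
tokens = torch.randn(1, 16, 32)
motion = motion_cue_from_attention(attn)
gated = gate_tokens(tokens, motion, strength=0.8)
```

Because the cue is read off the pretrained model's own attention maps at inference time, no weights change, which is consistent with the training-free claim above.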
Similar Papers
SelfMOTR: Revisiting MOTR with Self-Generating Detection Priors
CV and Pattern Recognition
Tracks moving objects better by using its own smart guesses.
4D3R: Motion-Aware Neural Reconstruction and Rendering of Dynamic Scenes from Monocular Videos
CV and Pattern Recognition
Creates realistic 3D videos from regular videos.
Consistent and Controllable Image Animation with Motion Linear Diffusion Transformers
CV and Pattern Recognition
Makes animated pictures move smoothly and look real.