Rig3R: Rig-Aware Conditioning for Learned 3D Reconstruction
By: Samuel Li, Pujith Kachana, Prajwal Chidananda, and more
Potential Business Impact:
Helps robots and self-driving cars understand 3D space from multiple cameras.
Estimating agent pose and 3D scene structure from multi-camera rigs is a central task in embodied AI applications such as autonomous driving. Recent learned approaches such as DUSt3R have shown impressive performance in multiview settings. However, these models treat images as unstructured collections, limiting their effectiveness in scenarios where frames are captured from synchronized rigs with known or inferable structure. To address this, we introduce Rig3R, a generalization of prior multiview reconstruction models that incorporates rig structure when available, and learns to infer it when not. Rig3R conditions on optional rig metadata including camera ID, time, and rig poses to develop a rig-aware latent space that remains robust to missing information. It jointly predicts pointmaps and two types of raymaps: a pose raymap relative to a global frame, and a rig raymap relative to a rig-centric frame consistent across time. Rig raymaps allow the model to infer rig structure directly from input images when metadata is missing. Rig3R achieves state-of-the-art performance in 3D reconstruction, camera pose estimation, and rig discovery, outperforming both traditional and learned methods by 17-45% mAA across diverse real-world rig datasets, all in a single forward pass without post-processing or iterative refinement.
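The abstract does not spell out how a raymap is parameterized, so the sketch below assumes the common origin-plus-direction encoding of a camera as per-pixel rays. The function name `build_raymap` and its arguments are hypothetical; the point is only to illustrate how the same construction can yield either of the two raymap types the paper names, depending on which reference frame the camera transform targets.

```python
# Minimal, assumed sketch of a per-pixel raymap (not Rig3R's actual code).
# A raymap encodes a camera's pose as an image of rays: each pixel stores
# the ray origin and unit direction in some reference frame.
import numpy as np

def build_raymap(K, cam_to_ref, height, width):
    """Encode a camera as a (H, W, 6) raymap in a reference frame.

    K          : (3, 3) camera intrinsics.
    cam_to_ref : (4, 4) camera-to-reference transform. Passing world-frame
                 extrinsics would give a global "pose raymap"; passing a
                 camera-to-rig transform would give a "rig raymap"
                 (an assumption, for illustration only).
    """
    # Pixel grid sampled at pixel centers.
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)           # (H, W, 3)

    # Unproject pixels to unit ray directions in the camera frame.
    dirs_cam = pix @ np.linalg.inv(K).T
    dirs_cam /= np.linalg.norm(dirs_cam, axis=-1, keepdims=True)

    # Rotate directions into the reference frame; the camera center is the
    # shared origin of every ray.
    R, t = cam_to_ref[:3, :3], cam_to_ref[:3, 3]
    dirs_ref = dirs_cam @ R.T
    origins = np.broadcast_to(t, dirs_ref.shape)

    return np.concatenate([origins, dirs_ref], axis=-1)       # (H, W, 6)
```

Under this reading, "rig discovery" amounts to predicting the rig raymaps consistently across time even when the camera-to-rig transforms are not provided as metadata.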
Similar Papers
Adapt3R: Adaptive 3D Scene Representation for Domain Transfer in Imitation Learning
CV and Pattern Recognition
Teaches robots to do new jobs without retraining.
Regist3R: Incremental Registration with Stereo Foundation Model
Image and Video Processing
Builds detailed 3D models from many pictures.
Test3R: Learning to Reconstruct 3D at Test Time
CV and Pattern Recognition
Makes 3D reconstructions more accurate by adapting at test time.