Active3D: Active High-Fidelity 3D Reconstruction via Hierarchical Uncertainty Quantification
By: Yan Li, Yingzhao Li, Gim Hee Lee
Potential Business Impact:
Builds accurate 3D models of objects and scenes with fewer camera views.
In this paper, we present an active exploration framework for high-fidelity 3D reconstruction that incrementally builds a multi-level uncertainty space and selects next-best-views through an uncertainty-driven motion planner. We introduce a hybrid implicit-explicit representation that fuses neural fields with Gaussian primitives to jointly capture global structural priors and locally observed details. Based on this hybrid state, we derive a hierarchical uncertainty volume that quantifies both implicit global structure quality and explicit local surface confidence. To focus optimization on the most informative regions, we propose an uncertainty-driven keyframe selection strategy that anchors high-entropy viewpoints as sparse attention nodes, coupled with a viewpoint-space sliding window for uncertainty-aware local refinement. The planning module formulates next-best-view selection as an Expected Hybrid Information Gain problem and incorporates a risk-sensitive path planner to ensure efficient and safe exploration. Extensive experiments on challenging benchmarks demonstrate that our approach consistently achieves state-of-the-art accuracy, completeness, and rendering quality, highlighting its effectiveness for real-world active reconstruction and robotic perception tasks.
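The abstract's planning module scores candidate viewpoints by expected information gain over an uncertainty volume and picks the next-best view. Below is a minimal, illustrative sketch of that idea under strong simplifying assumptions: a single-level random uncertainty grid stands in for the paper's hierarchical volume, and a toy slab-based visibility function stands in for camera ray-casting. The names `visible_voxels` and `expected_information_gain` are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uncertainty volume: per-voxel uncertainty on a coarse grid.
# The paper uses a hierarchical (implicit global + explicit local) volume;
# a single random grid is used here purely for illustration.
uncertainty = rng.random((16, 16, 16))

def visible_voxels(view, grid_shape):
    """Toy visibility model: a 'view' (axis, index) sees one axis-aligned
    slab of voxels. A real system would ray-cast from the camera pose."""
    axis, index = view
    mask = np.zeros(grid_shape, dtype=bool)
    mask[(slice(None),) * axis + (index,)] = True
    return mask

def expected_information_gain(view, uncertainty):
    """Sum of uncertainty over voxels the view would observe. Stands in
    for the paper's Expected Hybrid Information Gain, which additionally
    balances implicit (global) and explicit (local) terms."""
    return uncertainty[visible_voxels(view, uncertainty.shape)].sum()

# Candidate next views: every (axis, slab-index) pair on the grid.
candidates = [(axis, idx) for axis in range(3) for idx in range(16)]

# Next-best view = the candidate with the highest expected gain.
best = max(candidates, key=lambda v: expected_information_gain(v, uncertainty))
print("next-best view:", best)
```

A full implementation would also subtract traversal cost and collision risk along the path to each candidate, which is the role of the paper's risk-sensitive planner.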
Similar Papers
ActivePose: Active 6D Object Pose Estimation and Tracking for Robotic Manipulation
Robotics
Robots learn to see and grab things better.
Peering into the Unknown: Active View Selection with Neural Uncertainty Maps for 3D Reconstruction
CV and Pattern Recognition
Helps computers build 3D shapes with fewer pictures.
OUGS: Active View Selection via Object-aware Uncertainty Estimation in 3DGS
CV and Pattern Recognition
Makes 3D object pictures clearer and faster.