Pixel-Perfect Visual Geometry Estimation
By: Gangwei Xu, Haotong Lin, Hongcheng Luo, and more
Potential Business Impact:
Turns ordinary photos and videos into clean 3D models, free of stray "flying pixel" errors.
Recovering clean and accurate geometry from images is essential for robotics and augmented reality. However, existing geometry foundation models still suffer severely from flying pixels and the loss of fine details. In this paper, we present pixel-perfect visual geometry models that can predict high-quality, flying-pixel-free point clouds by leveraging generative modeling in the pixel space. We first introduce Pixel-Perfect Depth (PPD), a monocular depth foundation model built upon pixel-space diffusion transformers (DiT). To address the high computational complexity associated with pixel-space diffusion, we propose two key designs: 1) Semantics-Prompted DiT, which incorporates semantic representations from vision foundation models to prompt the diffusion process, preserving global semantics while enhancing fine-grained visual details; and 2) Cascade DiT architecture that progressively increases the number of image tokens, improving both efficiency and accuracy. To further extend PPD to video (PPVD), we introduce a new Semantics-Consistent DiT, which extracts temporally consistent semantics from a multi-view geometry foundation model. We then perform reference-guided token propagation within the DiT to maintain temporal coherence with minimal computational and memory overhead. Our models achieve the best performance among all generative monocular and video depth estimation models and produce significantly cleaner point clouds than all other models.
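Since the abstract describes the two key designs only at a high level, here is a minimal, hypothetical PyTorch sketch of how semantics prompting and a token cascade could fit together. Everything in it is an illustrative assumption: the block layout, the use of foundation-model features (e.g., DINOv2) as prompt tokens, and the `upsample_tokens` helper are stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticsPromptedDiTBlock(nn.Module):
    """Transformer block whose attention also sees semantic 'prompt' tokens
    (hypothetical layout, standing in for the paper's Semantics-Prompted DiT)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, sem: torch.Tensor) -> torch.Tensor:
        # x:   (B, N, D) noisy pixel tokens at the current diffusion step
        # sem: (B, M, D) semantic tokens projected from a frozen vision
        #      foundation model (e.g., DINOv2) -- the "prompt"
        h = self.norm1(x)
        kv = torch.cat([h, sem], dim=1)  # attend over pixels AND semantics
        x = x + self.attn(h, kv, kv, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

def upsample_tokens(x: torch.Tensor, hw: tuple[int, int]) -> torch.Tensor:
    """(B, H*W, D) -> (B, 2H*2W, D): nearest-neighbor upsampling of the token
    grid, a simple stand-in for how a cascade might grow the token count."""
    B, N, D = x.shape
    H, W = hw
    grid = x.transpose(1, 2).reshape(B, D, H, W)
    grid = F.interpolate(grid, scale_factor=2, mode="nearest")
    return grid.flatten(2).transpose(1, 2)

# Cascade DiT sketch: early blocks run on a coarse token grid, later blocks
# on a finer one, so most compute is spent at the cheap resolution.
B, D = 2, 256
sem = torch.randn(B, 64, D)                 # projected semantic tokens
x = torch.randn(B, 16 * 16, D)              # coarse 16x16 token grid
coarse = [SemanticsPromptedDiTBlock(D) for _ in range(2)]
fine = [SemanticsPromptedDiTBlock(D) for _ in range(2)]
for blk in coarse:
    x = blk(x, sem)
x = upsample_tokens(x, (16, 16))            # grow to a 32x32 token grid
for blk in fine:
    x = blk(x, sem)
print(x.shape)                              # torch.Size([2, 1024, 256])
```

Concatenating semantic tokens into the attention context is one simple way to let every pixel token consult global semantics at each denoising step, while the cascade keeps most of the quadratic attention cost at the coarse stage.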
Similar Papers
Pixel-Perfect Depth with Semantics-Prompted Diffusion Transformers
CV and Pattern Recognition
Makes 3D pictures from single photos clearer.
PixelDiT: Pixel Diffusion Transformers for Image Generation
CV and Pattern Recognition
Makes AI create clearer, more detailed pictures.
DiP: Taming Diffusion Models in Pixel Space
CV and Pattern Recognition
Creates detailed pictures much faster.