Learning Compact Latent Space for Representing Neural Signed Distance Functions with High-fidelity Geometry Details
By: Qiang Bai, Bojian Wu, Xi Yang, and more
Potential Business Impact:
Lets computers create detailed 3D shapes from many examples.
Neural signed distance functions (SDFs) have become a vital representation for 3D shapes and scenes with neural networks. An SDF is an implicit function that can be queried for the signed distance at any coordinate, and its zero level set recovers a 3D surface. Although implicit functions work well on a single shape or scene, they pose obstacles when analyzing multiple SDFs with high-fidelity geometry details, due to the limited information encoded in the latent space for SDFs and the resulting loss of geometry details. To overcome these obstacles, we introduce a method to represent multiple SDFs in a common space, aiming to recover more high-fidelity geometry details with more compact latent representations. Our key idea is to combine the benefits of generalization-based and overfitting-based learning strategies, which preserves high-fidelity geometry details with compact latent codes. Based on this framework, we also introduce a novel strategy for sampling training queries, which improves training efficiency and eliminates artifacts caused by the influence of other SDFs. We report numerical and visual evaluations on widely used benchmarks to validate our designs and show advantages over the latest methods in terms of representational ability and compactness.
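To make the SDF idea concrete, here is a minimal sketch (not from the paper, which uses a learned neural network) using an analytic sphere SDF: the function returns a negative distance inside the surface, positive outside, and zero exactly on it, and the zero level set is the surface one would extract.

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside, zero on the surface."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Query signed distances at specific 3D coordinates.
queries = np.array([
    [0.0, 0.0, 0.0],   # sphere center -> inside, distance -radius
    [2.0, 0.0, 0.0],   # outside the sphere
    [1.0, 0.0, 0.0],   # exactly on the surface
])
d = sphere_sdf(queries)
print(d)  # [-1.  1.  0.]
# The surface is the zero level set {x : f(x) = 0}; a mesher such as
# marching cubes recovers it by locating sign changes of f on a grid.
```

In a neural SDF, `sphere_sdf` is replaced by a trained network, and in a multi-shape setting the network is additionally conditioned on a per-shape latent code so that one common decoder represents many SDFs.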
Similar Papers
Geometric implicit neural representations for signed distance functions
CV and Pattern Recognition
Builds 3D shapes from pictures and points.
Leveraging 2D Priors and SDF Guidance for Dynamic Urban Scene Rendering
CV and Pattern Recognition
Makes 3D scenes look real without extra sensors.
Implicit 3D scene reconstruction using deep learning towards efficient collision understanding in autonomous driving
CV and Pattern Recognition
Helps self-driving cars see obstacles better.