Score: 1

Learning Scene-Level Signed Directional Distance Function with Ellipsoidal Priors and Neural Residuals

Published: March 25, 2025 | arXiv ID: 2503.20066v1

By: Zhirui Dai, Hojoon Shin, Yulun Tian, and more

Potential Business Impact:

Helps robots build dense maps of their surroundings and plan how to move through them.

Business Areas:
Navigation and Mapping

Dense geometric environment representations are critical for autonomous mobile robot navigation and exploration. Recent work shows that implicit continuous representations of occupancy, signed distance, or radiance learned with neural networks offer advantages in reconstruction fidelity, efficiency, and differentiability over explicit discrete representations based on meshes, point clouds, and voxels. In this work, we explore a directional formulation of signed distance, called the signed directional distance function (SDDF). Unlike a signed distance function (SDF) and similar to a neural radiance field (NeRF), SDDF takes a position and a viewing direction as input. Like SDF and unlike NeRF, SDDF directly provides the distance to the observed surface along that direction, rather than integrating along the view ray, allowing efficient view synthesis. To learn and predict scene-level SDDF efficiently, we develop a differentiable hybrid representation that combines explicit ellipsoid priors with implicit neural residuals. This approach allows the model to handle large distance discontinuities around obstacle boundaries while preserving the ability to make dense, high-fidelity predictions. We show that SDDF is competitive with state-of-the-art neural implicit scene models in reconstruction accuracy and rendering efficiency, while allowing differentiable view prediction for robot trajectory optimization.
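
The hybrid design described in the abstract, an explicit ellipsoid prior corrected by an implicit neural residual, can be sketched in a few lines. The snippet below is a minimal illustrative sketch and not the authors' implementation: the ellipsoid parameterization, the ray-ellipsoid no-hit handling, the MLP size, and the placeholder supervision are all assumptions made for the example.

# Minimal sketch (assumptions, not the paper's code): predict the SDDF value
# f(p, d) = distance from position p along unit direction d to the surface
# as an analytic ellipsoid prior plus a learned neural residual.
import torch
import torch.nn as nn


def ellipsoid_ray_distance(p, d, center, axes, miss_value=10.0):
    """Distance along unit direction d from point p to the axis-aligned
    ellipsoid ||(x - center) / axes|| = 1; returns miss_value when the
    ray never intersects the surface (an illustrative convention)."""
    q = (p - center) / axes              # point in the normalized frame
    u = d / axes                         # direction in the normalized frame
    a = (u * u).sum(-1)
    b = 2.0 * (q * u).sum(-1)
    c = (q * q).sum(-1) - 1.0
    disc = b * b - 4.0 * a * c
    hit = disc >= 0.0
    sqrt_disc = torch.sqrt(torch.clamp(disc, min=0.0))
    t_near = (-b - sqrt_disc) / (2.0 * a)   # nearest root (may be negative)
    return torch.where(hit, t_near, torch.full_like(t_near, miss_value))


class HybridSDDF(nn.Module):
    """Ellipsoid prior plus an MLP residual on (position, direction)."""

    def __init__(self, center, axes, hidden=128):
        super().__init__()
        self.register_buffer("center", torch.as_tensor(center, dtype=torch.float32))
        self.register_buffer("axes", torch.as_tensor(axes, dtype=torch.float32))
        self.residual = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, p, d):
        prior = ellipsoid_ray_distance(p, d, self.center, self.axes)
        res = self.residual(torch.cat([p, d], dim=-1)).squeeze(-1)
        return prior + res


# Toy usage with placeholder supervision: fit the residual so that the
# hybrid prediction matches observed ray distances.
model = HybridSDDF(center=[0.0, 0.0, 0.0], axes=[1.0, 1.0, 1.0])
p = torch.randn(32, 3) * 3.0
d = torch.nn.functional.normalize(torch.randn(32, 3), dim=-1)
target = torch.rand(32) * 5.0            # placeholder distance measurements
loss = torch.nn.functional.mse_loss(model(p, d), target)
loss.backward()                          # the whole pipeline is differentiable

Because both the analytic ray-ellipsoid distance and the MLP residual are differentiable, predicted views can be backpropagated through, which is the property that supports the differentiable view prediction for trajectory optimization mentioned in the abstract.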

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Robotics