Super LiDAR Reflectance for Robotic Perception
By: Wei Gao, Jie Zhang, Mingle Zhao, and more
Potential Business Impact:
Makes cheap sensors see like expensive ones.
Conventionally, human intuition defines vision as a modality of passive optical sensing, while active optical sensing is typically regarded as measurement rather than a default modality of vision. This situation is now changing: sensor technologies and data-driven paradigms are empowering active optical sensing to redefine the boundaries of vision, ushering in a new era of active vision. Light Detection and Ranging (LiDAR) sensors capture reflectance from object surfaces, which remains invariant under varying illumination conditions and therefore holds significant potential for robotic perception tasks such as detection, recognition, segmentation, and Simultaneous Localization and Mapping (SLAM). These applications often rely on dense sensing, typically achieved only with high-resolution, expensive LiDAR sensors; a key challenge with low-cost LiDARs is the sparsity of their scan data, which limits their broader application. To address this limitation, this work introduces a framework for generating dense LiDAR reflectance images from sparse data, leveraging the unique attributes of non-repeating scanning LiDAR (NRS-LiDAR). We tackle critical challenges, including reflectance calibration and the transition from static to dynamic scene domains, enabling the reconstruction of dense reflectance images in real-world settings. The key contributions of this work are a comprehensive dataset for LiDAR reflectance image densification, a densification network tailored to NRS-LiDAR, and diverse applications, such as loop closure and traffic lane detection, built on the generated dense reflectance images.
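To make the pipeline described above concrete, the sketch below shows one plausible way to accumulate sparse NRS-LiDAR reflectance returns into a 2-D image and pass them through a densification network. This is not the authors' implementation: the image resolution, field-of-view values, and the toy CNN (standing in for the paper's densification network) are all assumptions chosen for illustration.

```python
# Minimal sketch (assumed, not the paper's code): project sparse NRS-LiDAR
# reflectance returns onto an image grid, then densify with a small CNN.
import numpy as np
import torch
import torch.nn as nn

H, W = 128, 256            # assumed reflectance-image resolution
FOV_H, FOV_V = 70.0, 77.0  # assumed horizontal/vertical field of view in degrees


def project_to_image(points, reflectance):
    """Accumulate sparse LiDAR returns into a sparse 2-D reflectance image.

    points:      (N, 3) array of x, y, z in the sensor frame (x forward).
    reflectance: (N,) calibrated reflectance values in [0, 1].
    Returns the sparse image and a validity mask of the same shape.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    az = np.degrees(np.arctan2(y, x))               # azimuth angle
    el = np.degrees(np.arctan2(z, np.hypot(x, y)))  # elevation angle
    u = ((az / FOV_H + 0.5) * (W - 1)).round().astype(int)
    v = ((0.5 - el / FOV_V) * (H - 1)).round().astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    img = np.zeros((H, W), dtype=np.float32)
    mask = np.zeros((H, W), dtype=np.float32)
    img[v[valid], u[valid]] = reflectance[valid]
    mask[v[valid], u[valid]] = 1.0
    return img, mask


class DensifierCNN(nn.Module):
    """Toy encoder-free CNN that fills in missing reflectance pixels."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, sparse_img, mask):
        # Stack the sparse reflectance and its mask as two input channels.
        x = torch.stack([sparse_img, mask], dim=1)   # (B, 2, H, W)
        return self.net(x).squeeze(1)                # dense reflectance in [0, 1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(5000, 3)) * [20, 10, 2] + [25, 0, 0]
    refl = rng.uniform(0, 1, size=5000).astype(np.float32)
    sparse, mask = project_to_image(pts.astype(np.float32), refl)
    dense = DensifierCNN()(torch.from_numpy(sparse)[None], torch.from_numpy(mask)[None])
    print(dense.shape)  # torch.Size([1, 128, 256])
```

In practice, the dense reflectance image produced by such a network would then feed downstream tasks mentioned in the abstract, such as loop closure or traffic lane detection; the mask channel lets the network distinguish truly measured pixels from empty ones.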
Similar Papers
See and Beam: Leveraging LiDAR Sensing and Specular Surfaces for Indoor mmWave Connectivity
Networking and Internet Architecture
Makes Wi-Fi faster by bouncing signals off walls.
Machine Learning for LiDAR-Based Indoor Surface Classification in Intelligent Wireless Environments
Machine Learning (CS)
Helps Wi-Fi signals bounce better around rooms.
DepthVision: Robust Vision-Language Understanding through GAN-Based LiDAR-to-RGB Synthesis
Robotics
Helps robots see better in the dark.