UAV Position Estimation using a LiDAR-based 3D Object Detection Method
By: Uthman Olawoye, Jason N. Gross
Potential Business Impact:
Helps a ground robot track a drone without GPS.
This paper explores the application of a deep learning 3D object detection method to compute the relative position of an Unmanned Aerial Vehicle (UAV) with respect to an Unmanned Ground Vehicle (UGV) equipped with a LiDAR sensor in a GPS-denied environment. This was achieved by processing the LiDAR sensor's data with a 3D detection algorithm (PointPillars). PointPillars combines a column-voxel point-cloud representation with a 2D Convolutional Neural Network (CNN) to generate distinctive point-cloud features representing the object to be identified, in this case, the UAV. The current localization method uses point-cloud segmentation, Euclidean clustering, and predefined heuristics to obtain the relative position of the UAV. Results from the two methods were then compared against a reference truth solution.
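The pillar encoding at the heart of PointPillars can be illustrated with a minimal sketch: the LiDAR point cloud is binned into vertical columns on an x-y grid, and each column's points are then available for per-pillar feature extraction. This toy NumPy version (the grid size, ranges, and function name are illustrative assumptions, not the authors' implementation, which uses the full PointPillars network) shows only the grouping step:

```python
import numpy as np

def pillarize(points, cell=0.16, x_range=(0.0, 4.0), y_range=(0.0, 4.0)):
    """Group LiDAR points into vertical x-y columns ("pillars").

    points : (N, 3) array of x, y, z coordinates.
    Returns a dict mapping (ix, iy) grid indices to arrays of member points.
    Illustrative sketch only; cell size and ranges are assumed values.
    """
    pillars = {}
    for p in points:
        x, y = p[0], p[1]
        # Drop points outside the region of interest
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue
        ix = int((x - x_range[0]) // cell)
        iy = int((y - y_range[0]) // cell)
        pillars.setdefault((ix, iy), []).append(p)
    return {k: np.array(v) for k, v in pillars.items()}

# Toy cloud: two points share one column, a third lands in another
cloud = np.array([
    [0.05, 0.05, 0.1],
    [0.06, 0.07, 0.5],
    [1.00, 1.00, 0.2],
])
grid = pillarize(cloud)  # two occupied pillars
```

In the full pipeline, each pillar's points are augmented with offsets from the pillar center, encoded by a small network, and scattered back onto the 2D grid so a standard 2D CNN can produce the detection features.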
Similar Papers
Vision-based Lifting of 2D Object Detections for Automated Driving
CV and Pattern Recognition
Cars see in 3D using only cameras.
UAV Object Detection and Positioning in a Mining Industrial Metaverse with Custom Geo-Referenced Data
Image and Video Processing
Drones map mines for safer, smarter digging.
A Light Perspective for 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see better with less power.