LAA3D: A Benchmark of Detecting and Tracking Low-Altitude Aircraft in 3D Space
By: Hai Wu, Shuai Tang, Jiale Wang, and more
Potential Business Impact:
Helps drones see and track other flying things.
Perception of Low-Altitude Aircraft (LAA) in 3D space enables precise 3D object localization and behavior understanding. However, datasets tailored for 3D LAA perception remain scarce. To address this gap, we present LAA3D, a large-scale dataset designed to advance 3D detection and tracking of low-altitude aerial vehicles. LAA3D contains 15,000 real images and 600,000 synthetic frames, captured across diverse scenarios, including urban and suburban environments. It covers multiple aerial object categories, including electric Vertical Take-Off and Landing (eVTOL) aircraft, Micro Aerial Vehicles (MAVs), and helicopters. Each instance is annotated with a 3D bounding box, a class label, and an instance identity, supporting tasks such as 3D object detection, 3D multi-object tracking (MOT), and 6-DoF pose estimation. In addition, we establish the LAA3D Benchmark, which integrates multiple tasks and methods under unified evaluation protocols for comparison. Furthermore, we propose MonoLAA, a monocular 3D detection baseline that achieves robust 3D localization from zoom cameras with varying focal lengths. Models pretrained on synthetic images transfer effectively to real-world data after fine-tuning, demonstrating strong sim-to-real generalization. LAA3D provides a comprehensive foundation for future research in low-altitude 3D object perception.
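The abstract does not describe how MonoLAA copes with zoom cameras whose focal length changes between frames. A common approach in monocular 3D detection is to predict depth under a canonical focal length and rescale it by the actual focal length at inference, then back-project the detected 2D center into 3D using the camera intrinsics. The sketch below illustrates that general idea only; the function names, the canonical focal length, and the example intrinsics are hypothetical and not taken from the paper.

```python
import numpy as np


def rescale_depth(depth_canonical: float, f_actual: float, f_canonical: float = 1000.0) -> float:
    """Rescale a depth predicted under a canonical focal length to the actual zoom level.

    Under a pinhole model, zooming in by a factor f_actual / f_canonical makes an
    object at a fixed distance appear larger by the same factor, so a depth estimate
    made with canonical intrinsics is scaled accordingly. Hypothetical helper; not
    part of the released benchmark.
    """
    return depth_canonical * (f_actual / f_canonical)


def backproject_center(u: float, v: float, depth: float, K: np.ndarray) -> np.ndarray:
    """Lift a detected 2D object center (u, v) with depth to a 3D point in camera coordinates."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])


# Example: a zoomed-in camera with a 2400 px focal length (made-up values),
# and a detection at the image center with a depth predicted at the canonical focal length.
K = np.array([[2400.0, 0.0, 960.0],
              [0.0, 2400.0, 540.0],
              [0.0, 0.0, 1.0]])
depth = rescale_depth(depth_canonical=45.0, f_actual=K[0, 0])
center_3d = backproject_center(960.0, 540.0, depth, K)
print(center_3d)  # 3D object center in camera coordinates
```

This focal-length normalization is a standard trick in the monocular 3D detection literature for mixing images with different intrinsics; whether MonoLAA uses it or another mechanism is not stated in the abstract.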
Similar Papers
UAV-MM3D: A Large-Scale Synthetic Benchmark for 3D Perception of Unmanned Aerial Vehicles with Multi-Modal Data
CV and Pattern Recognition
Creates realistic drone videos for training AI.
LeAD-M3D: Leveraging Asymmetric Distillation for Real-time Monocular 3D Detection
CV and Pattern Recognition
Lets cameras see in 3D without extra sensors.
RIS-LAD: A Benchmark and Model for Referring Low-Altitude Drone Image Segmentation
CV and Pattern Recognition
Lets drones find objects described in natural language.