Learnability-Driven Submodular Optimization for Active Roadside 3D Detection
By: Ruiyu Mao, Baoming Zhang, Nicholas Ruozzi, and more
Potential Business Impact:
Helps self-driving cars learn from roadside cameras.
Roadside perception datasets are typically constructed via cooperative labeling between synchronized vehicle and roadside frame pairs. However, real deployment often requires annotation of roadside-only data due to hardware and privacy constraints. Even human experts struggle to produce accurate labels without vehicle-side data (images, LiDAR), which not only increases annotation difficulty and cost, but also reveals a fundamental learnability problem: many roadside-only scenes contain distant, blurred, or occluded objects whose 3D properties are ambiguous from a single view and can only be reliably annotated by cross-checking paired vehicle-roadside frames. We refer to such cases as inherently ambiguous samples. To reduce wasted annotation effort on inherently ambiguous samples while still obtaining high-performing models, we turn to active learning. This work focuses on active learning for roadside monocular 3D object detection and proposes a learnability-driven framework that selects scenes that are both informative and reliably labelable, suppressing inherently ambiguous samples while ensuring coverage. Experiments demonstrate that our method, LH3D, achieves 86.06%, 67.32%, and 78.67% of full performance for vehicles, pedestrians, and cyclists respectively, using only 25% of the annotation budget on DAIR-V2X-I, significantly outperforming uncertainty-based baselines. This confirms that learnability, not uncertainty, matters for roadside 3D perception.
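The title and abstract point to a submodular selection objective that balances informativeness, learnability, and coverage, but the exact formulation is not given here. The sketch below is a generic greedy budgeted-selection routine that weights a coverage objective by a per-scene learnability score; the function name, the `similarity` matrix, and the `learnability` scores are illustrative assumptions, not the authors' actual method or API.

```python
import numpy as np

def greedy_learnability_selection(similarity, learnability, budget):
    """Greedy selection for a monotone submodular coverage objective.

    Hypothetical sketch: each candidate scene i has a learnability score
    learnability[i] in [0, 1] (higher = more reliably labelable), and
    similarity[i, j] in [0, 1] measures how well scene i "covers" scene j.
    The objective f(S) = sum_j learnability[j] * max_{i in S} similarity[i, j]
    is monotone submodular, so standard greedy selection enjoys the usual
    (1 - 1/e) approximation guarantee under a cardinality budget.
    """
    n = similarity.shape[0]
    selected = []
    best_cover = np.zeros(n)  # best coverage of each scene j by the current set
    current_value = 0.0
    for _ in range(budget):
        # Marginal gain of adding each candidate i to the current set.
        cover_if_added = np.maximum(similarity, best_cover[None, :])
        gains = (learnability * cover_if_added).sum(axis=1) - current_value
        gains[selected] = -np.inf  # never re-pick an already chosen scene
        i = int(np.argmax(gains))
        if gains[i] <= 0:
            break
        selected.append(i)
        best_cover = np.maximum(best_cover, similarity[i])
        current_value = float((learnability * best_cover).sum())
    return selected

# Toy usage: 6 candidate scenes, select 2 under the budget.
rng = np.random.default_rng(0)
sim = rng.uniform(0.0, 1.0, size=(6, 6))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)
learn = rng.uniform(0.2, 1.0, size=6)  # stand-in learnability scores
print(greedy_learnability_selection(sim, learn, budget=2))
```

In this sketch, down-weighting a scene's learnability score directly shrinks its contribution to the objective, so inherently ambiguous samples are naturally suppressed while the coverage term keeps the selected set diverse.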
Similar Papers
IROAM: Improving Roadside Monocular 3D Object Detection Learning from Autonomous Vehicle Data Domain
CV and Pattern Recognition
Helps self-driving cars see better from the road.
2.5D Object Detection for Intelligent Roadside Infrastructure
CV and Pattern Recognition
Helps self-driving cars see better from the road.
MoniRefer: A Real-world Large-scale Multi-modal Dataset based on Roadside Infrastructure for 3D Visual Grounding
CV and Pattern Recognition
Helps cameras find objects using words.