Performance-guided Reinforced Active Learning for Object Detection
By: Zhixuan Liang, Xingyu Zeng, Rui Zhao, and more
Potential Business Impact:
Teaches computers to find objects using fewer labeled pictures.
Active learning (AL) strategies aim to train high-performance models with minimal labeling effort by selecting only the most informative instances for annotation. Current approaches to evaluating data informativeness focus predominantly on the data's distribution or intrinsic information content and do not directly correlate with downstream task performance, such as mean average precision (mAP) in object detection. We therefore propose Performance-guided (i.e., mAP-guided) Reinforced Active Learning for Object Detection (MGRAL), a novel approach that uses the expected change in model output as its measure of informativeness. To address the combinatorial explosion of batch sample selection and the non-differentiable relation between model performance and the selected batch, MGRAL employs a reinforcement learning-based sampling agent that optimizes selection via policy gradient, with mAP improvement as the reward. Moreover, to reduce the computational overhead of estimating mAP from unlabeled samples, MGRAL adopts an unsupervised estimation scheme with fast look-up tables, making deployment feasible. We evaluate MGRAL's active learning performance on detection tasks over the PASCAL VOC and COCO benchmarks. Our approach achieves the strongest active learning curves, supported by qualitative visualizations, establishing a new paradigm for reinforcement learning-driven active object detection.
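To make the abstract's core loop concrete, below is a minimal sketch (in PyTorch) of a REINFORCE-style sampling agent that scores unlabeled images and is rewarded by an estimated mAP gain. This is not the authors' implementation: the names `SamplingAgent`, `select_batch`, and `estimate_map_gain`, as well as the network sizes, are hypothetical placeholders, and the joint log-probability of the selected batch is approximated by summing per-sample log-probabilities.

```python
# Illustrative sketch only, not the MGRAL code. Assumes per-image feature
# vectors for the unlabeled pool and a user-supplied mAP-gain estimator.
import torch
import torch.nn as nn

class SamplingAgent(nn.Module):
    """Scores per-image features; a softmax policy over images picks the batch."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score_net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_unlabeled, feat_dim) -> selection logits of shape (num_unlabeled,)
        return self.score_net(feats).squeeze(-1)

def select_batch(agent: SamplingAgent, feats: torch.Tensor, budget: int):
    """Sample `budget` images without replacement from the policy distribution."""
    logits = agent(feats)
    probs = torch.softmax(logits, dim=0)
    idx = torch.multinomial(probs, num_samples=budget, replacement=False)
    # Approximate the joint log-probability of the batch by summing marginals.
    log_prob = torch.log(probs[idx] + 1e-12).sum()
    return idx, log_prob

def reinforce_step(agent, optimizer, feats, budget, estimate_map_gain, baseline=0.0):
    """One policy-gradient update with the (estimated) mAP improvement as reward."""
    idx, log_prob = select_batch(agent, feats, budget)
    reward = estimate_map_gain(idx)            # hypothetical: returns a scalar mAP gain
    loss = -(reward - baseline) * log_prob     # REINFORCE: maximize expected reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return idx, float(reward)
```

In an actual AL cycle, `estimate_map_gain` would stand in for the paper's fast look-up-table estimate of the mAP improvement from adding the selected batch to the labeled pool, and the agent would be updated once per selection round (e.g., with `torch.optim.Adam(agent.parameters())`).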
Similar Papers
Active Learning Methods for Efficient Data Utilization and Model Performance Enhancement
Machine Learning (CS)
Teaches computers to learn with fewer examples.
Box-Level Class-Balanced Sampling for Active Object Detection
CV and Pattern Recognition
Teaches computers to find objects better with less work.
GRAIL: A Benchmark for GRaph ActIve Learning in Dynamic Sensing Environments
Machine Learning (CS)
Helps computers learn faster with less data.