Decomposition Sampling for Efficient Region Annotations in Active Learning
By: Jingna Qiu, Frauke Wilm, Mathias Öttl, and more
Potential Business Impact:
Helps doctors find rare diseases in scans.
Active learning improves annotation efficiency by selecting the most informative samples for annotation and model training. While most prior work has focused on selecting informative images for classification tasks, we investigate the more challenging setting of dense prediction, where annotations are more costly and time-intensive, especially in medical imaging. Region-level annotation has been shown to be more efficient than image-level annotation for these tasks. However, existing methods for selecting representative annotation regions suffer from high computational and memory costs, irrelevant region choices, and heavy reliance on uncertainty sampling. We propose decomposition sampling (DECOMP), a new active learning sampling strategy that addresses these limitations. It enhances annotation diversity by decomposing images into class-specific components using pseudo-labels and sampling regions from each class. Class-wise predictive confidence further guides the sampling process, ensuring that difficult classes receive additional annotations. Across ROI classification, 2-D segmentation, and 3-D segmentation, DECOMP consistently surpasses baseline methods by better sampling minority-class regions and boosting performance on these challenging classes. Code is available at https://github.com/JingnaQiu/DECOMP.git.
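The abstract describes two ideas: decomposing images into class-specific components via pseudo-labels, and weighting the sampling budget by class-wise predictive confidence so that difficult classes receive more annotation regions. The sketch below is only an illustration of that idea, not the authors' implementation (see the linked repository for that); the function name `decomp_sample`, its inputs, the confidence-to-budget weighting, and the uniform seed sampling are all assumptions made here for clarity.

```python
import numpy as np

def decomp_sample(pseudo_labels, class_confidences, regions_per_round, rng=None):
    """Hypothetical sketch of DECOMP-style region sampling.

    pseudo_labels: dict mapping image id -> 2-D array of per-pixel pseudo-labels
    class_confidences: dict mapping class id -> mean predictive confidence in [0, 1]
    regions_per_round: total number of annotation regions to request this round
    Returns a list of (image_id, class_id, (row, col)) region seeds.
    """
    rng = rng or np.random.default_rng(0)

    # Allocate more of the budget to low-confidence (difficult) classes.
    classes = sorted(class_confidences)
    weights = np.array([1.0 - class_confidences[c] for c in classes])
    weights = weights / weights.sum()
    budget = {c: int(round(w * regions_per_round)) for c, w in zip(classes, weights)}

    selected = []
    for c in classes:
        # Class-specific component: all pseudo-labelled pixels of class c.
        candidates = []
        for image_id, labels in pseudo_labels.items():
            rows, cols = np.nonzero(labels == c)
            candidates.extend((image_id, c, (int(r), int(k))) for r, k in zip(rows, cols))
        if not candidates or budget[c] == 0:
            continue
        # Sample region seeds uniformly within this class component.
        idx = rng.choice(len(candidates), size=min(budget[c], len(candidates)), replace=False)
        selected.extend(candidates[i] for i in idx)
    return selected
```

Each returned seed would then be expanded into a fixed-size annotation region around the sampled pixel; rounding the per-class budgets, as done here, may not exactly exhaust `regions_per_round` and is one of several simplifications in this sketch.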
Similar Papers
Label-Efficient Point Cloud Segmentation with Active Learning
CV and Pattern Recognition
Teaches computers to learn from less 3D data.
nnActive: A Framework for Evaluation of Active Learning in 3D Biomedical Segmentation
CV and Pattern Recognition
Helps doctors label medical scans faster.
Box-Level Class-Balanced Sampling for Active Object Detection
CV and Pattern Recognition
Teaches computers to find objects better with less work.