Sampling Control for Imbalanced Calibration in Semi-Supervised Learning
By: Senmao Tian, Xiang Wei, Shunli Zhang
Potential Business Impact:
Improves machine learning when some categories have far fewer examples than others.
Class imbalance remains a critical challenge in semi-supervised learning (SSL), especially when distributional mismatches between labeled and unlabeled data lead to biased classification. Although existing methods address this issue by adjusting logits based on the estimated class distribution of unlabeled data, they typically handle model imbalance in a coarse-grained manner, conflating data imbalance with the bias arising from varying class-specific learning difficulties. To disentangle these effects, we propose a unified framework, SC-SSL, which suppresses model bias through decoupled sampling control. During training, we identify the key variables for sampling control under idealized conditions; by introducing a classifier with explicit expansion capability and adaptively adjusting sampling probabilities across different data distributions, SC-SSL mitigates feature-level imbalance for minority classes. At inference, we further analyze the weight imbalance of the linear classifier and apply post-hoc sampling control with an optimized bias vector that directly calibrates the logits. Extensive experiments across benchmark datasets and distribution settings validate the consistency and state-of-the-art performance of SC-SSL.
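The abstract describes two generic levers that SC-SSL builds on: adjusting per-class sampling probabilities during training, and calibrating logits at inference with a bias vector derived from an estimated class distribution. Below is a minimal Python sketch of the textbook versions of these ideas (inverse-frequency sampling and log-prior logit adjustment); the function names and the `power` and `tau` knobs are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sampling_weights(class_counts, power=1.0):
    """Inverse-frequency sampling probabilities (illustrative, not SC-SSL's
    exact rule): rarer classes are drawn more often during training.
    `power` (an assumed knob) interpolates between uniform-over-samples
    (power=0) and uniform-over-classes (power=1)."""
    counts = np.asarray(class_counts, dtype=float)
    w = counts ** -power
    return w / w.sum()

def calibrate_logits(logits, class_prior, tau=1.0):
    """Post-hoc logit calibration with a log-prior bias vector: frequent
    classes are penalized so minority classes are not under-predicted.
    `tau` (an assumed temperature) scales the strength of the shift."""
    bias = tau * np.log(np.asarray(class_prior) + 1e-12)
    return logits - bias

# Toy usage: 3 classes with a 6:3:1 imbalance in the labeled data.
print(sampling_weights([600, 300, 100]))          # minority class sampled most

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))
prior = np.array([0.6, 0.3, 0.1])                 # estimated class distribution
print(calibrate_logits(logits, prior).argmax(1))  # predictions after calibration
```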
Similar Papers
CalibrateMix: Guided-Mixup Calibration of Image Semi-Supervised Models
CV and Pattern Recognition
Makes computer guesses more honest and accurate.
CaliMatch: Adaptive Calibration for Improving Safe Semi-supervised Learning
Machine Learning (CS)
Makes AI better at learning with less labeled data.
SeMi: When Imbalanced Semi-Supervised Learning Meets Mining Hard Examples
CV and Pattern Recognition
Helps computers learn better from messy, uneven data.