LCGC: Learning from Consistency Gradient Conflicting for Class-Imbalanced Semi-Supervised Debiasing
By: Weiwei Xing, Yue Cheng, Hongzhu Yi, and more
Potential Business Impact:
Fixes computer guesses when data is unfair.
Classifiers often learn to be biased toward the majority classes of a class-imbalanced dataset, especially under the semi-supervised learning (SSL) setting. Previous work tries to re-balance the classifier by subtracting the logits of a class-irrelevant image, but this lacks a firm theoretical basis. We theoretically analyze why exploiting a baseline image can refine pseudo-labels and prove that the black image is the best choice. We also show that as training deepens, the pseudo-labels before and after refinement become closer. Based on this observation, we propose a debiasing scheme dubbed LCGC, for Learning from Consistency Gradient Conflicting, which encourages biased class predictions during training. We intentionally update the pseudo-labels whose gradients conflict with the debiased logits, i.e., the optimization direction offered by the over-imbalanced classifier predictions. At test time, we then debias the predictions by subtracting the logits of the baseline image. Extensive experiments demonstrate that LCGC significantly improves the prediction accuracy of existing class-imbalanced SSL (CISSL) models on public benchmarks.
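The two mechanisms the abstract describes, test-time logit debiasing with a black baseline image and pseudo-label refinement when biased and debiased predictions conflict, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a PyTorch classifier `model` that maps images to class logits, simplifies the gradient-conflict check to a label-disagreement test, and the confidence threshold `tau` is a hypothetical parameter.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the debiasing idea described above (not the paper's
# exact code). Assumes `model` maps image batches to class logits.

def baseline_logits(model, image_shape, device):
    """Logits of an all-black baseline image, used as a bias estimate."""
    black = torch.zeros((1, *image_shape), device=device)
    with torch.no_grad():
        return model(black)

def debiased_predict(model, x, image_shape):
    """Test-time debiasing: subtract the black image's logits."""
    bias = baseline_logits(model, image_shape, x.device)
    return model(x) - bias  # debiased logits

def refine_pseudo_labels(logits_u, bias, tau=0.95):
    """Keep the biased pseudo-label when it agrees with the debiased
    prediction; otherwise adopt the debiased class. The gradient-conflict
    criterion is simplified here to a label-disagreement check, and
    `tau` is an illustrative confidence threshold (an assumption)."""
    probs = F.softmax(logits_u, dim=-1)
    conf, biased_lbl = probs.max(dim=-1)
    debiased_lbl = (logits_u - bias).argmax(dim=-1)
    labels = torch.where(biased_lbl == debiased_lbl, biased_lbl, debiased_lbl)
    mask = conf >= tau  # only train on confident unlabeled samples
    return labels, mask
```

In this reading, training keeps the (intentionally biased) predictions except where they disagree with the debiased view, and the subtraction step is applied only once, at inference.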
Similar Papers
Bi-CoG: Bi-Consistency-Guided Self-Training for Vision-Language Models
Machine Learning (CS)
Makes AI learn better with less labeled examples.
DebGCD: Debiased Learning with Distribution Guidance for Generalized Category Discovery
CV and Pattern Recognition
Helps computers learn about new things they haven't seen.
Sampling Control for Imbalanced Calibration in Semi-Supervised Learning
Machine Learning (CS)
Fixes computer learning when some groups are rare.