Long-Tailed Recognition via Information-Preservable Two-Stage Learning
By: Fudong Lin, Xu Yuan
Potential Business Impact:
Helps computers learn better from rare examples.
Imbalance (or a long tail) is inherent in many real-world data distributions and often biases deep classification models toward frequent classes, resulting in poor performance on tail classes. In this paper, we propose a novel two-stage learning approach that mitigates this majority-biased tendency while preserving valuable information within datasets. The first stage introduces a new representation learning technique grounded in information theory; it is theoretically equivalent to minimizing intra-class distance and yields an effective, well-separated feature space. The second stage develops a novel sampling strategy that selects mathematically informative instances, rectifying majority-biased decision boundaries without compromising the model's overall performance. As a result, our approach achieves state-of-the-art performance across various long-tailed benchmark datasets, as validated by extensive experiments. Our code is available at https://github.com/fudong03/BNS_IPDPP.
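As a rough illustration only (not the authors' released implementation), the sketch below approximates the two stages with common stand-ins: a center-loss-style penalty for the stage-one goal of minimizing intra-class distance, and a smallest-margin selector as one plausible reading of "informative instances" for stage two. The function names, the margin criterion, and the use of class centers are all assumptions made for this example.

```python
# Hypothetical sketch; the paper's actual objectives live at
# https://github.com/fudong03/BNS_IPDPP.
import torch
import torch.nn.functional as F


def intra_class_distance_loss(features: torch.Tensor,
                              labels: torch.Tensor,
                              centers: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between each feature and its class center.

    Minimizing this pulls same-class features together, the property the
    abstract attributes to the stage-one representation objective.
    """
    return F.mse_loss(features, centers[labels])


def select_informative_indices(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Pick the k instances with the smallest top-1 vs. top-2 logit margin.

    A small margin means a sample sits near the decision boundary, a common
    proxy for "informative" when rebalancing a majority-biased classifier.
    """
    top2 = logits.topk(2, dim=1).values   # (N, 2): two largest logits per sample
    margin = top2[:, 0] - top2[:, 1]      # confidence gap per sample
    return margin.argsort()[:k]           # smallest margins first


if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(32, 16)           # batch of 16-d features
    labels = torch.randint(0, 4, (32,))   # 4 classes
    centers = torch.randn(4, 16)          # (learnable) per-class centers
    logits = torch.randn(32, 4)           # classifier outputs

    print(intra_class_distance_loss(feats, labels, centers))
    print(select_informative_indices(logits, k=8))
```

In a two-stage setup of this kind, the compactness penalty would typically be added to the stage-one training loss, while the index selector would drive which samples are replayed when fine-tuning the classifier head in stage two.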
Similar Papers
Rethinking Long-tailed Dataset Distillation: A Uni-Level Framework with Unbiased Recovery and Relabeling
CV and Pattern Recognition
Teaches computers to learn better from messy data.
Classifying Long-tailed and Label-noise Data via Disentangling and Unlearning
Machine Learning (CS)
Helps computers learn from noisy, rare data.
Mixture of Balanced Information Bottlenecks for Long-Tailed Visual Recognition
CV and Pattern Recognition
Helps computers recognize many things, even rare ones.