Score: 2

Long-Tailed Recognition via Information-Preservable Two-Stage Learning

Published: October 9, 2025 | arXiv ID: 2510.08836v1

By: Fudong Lin, Xu Yuan

Potential Business Impact:

Helps computers learn better from rare examples.

Business Areas:
A/B Testing Data and Analytics

Imbalance (i.e., a long tail) is inherent to many real-world data distributions and often biases deep classification models toward frequent classes, resulting in poor performance on tail classes. In this paper, we propose a novel two-stage learning approach that mitigates this majority-biased tendency while preserving valuable information within datasets. Specifically, the first stage introduces a new representation learning technique from an information theory perspective; it is theoretically equivalent to minimizing intra-class distance, yielding an effective and well-separated feature space. The second stage develops a novel sampling strategy that selects mathematically informative instances, which can rectify majority-biased decision boundaries without compromising the model's overall performance. As a result, our approach achieves state-of-the-art performance across various long-tailed benchmark datasets, as validated by extensive experiments. Our code is available at https://github.com/fudong03/BNS_IPDPP.
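The two stages described in the abstract can be sketched in a minimal form: a compactness objective that shrinks intra-class distance around class centroids, and a selection step that keeps the instances the current model is least confident about. This is an illustrative sketch only; the function names, the confidence-based selection criterion, and the random data are assumptions, not the paper's actual BNS_IPDPP method.

```python
import numpy as np

def intra_class_distance(features, labels):
    # Mean squared distance of each feature vector to its class centroid;
    # minimizing this is the stage-1 objective sketched in the abstract.
    total = 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        centroid = cls.mean(axis=0)
        total += ((cls - centroid) ** 2).sum()
    return total / len(features)

def select_informative(features, labels, confidences, k):
    # Stage-2 stand-in: keep the k lowest-confidence instances as the
    # "informative" subset used to rectify biased decision boundaries.
    # (The paper's exact mathematical criterion is not reproduced here.)
    idx = np.argsort(confidences)[:k]
    return features[idx], labels[idx]

# Toy data standing in for learned features of a 3-class problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = rng.integers(0, 3, size=100)
conf = rng.random(100)

loss = intra_class_distance(X, y)
X_sel, y_sel = select_informative(X, y, conf, k=10)
```

In a full pipeline, the stage-1 loss would be optimized jointly with a classification loss, and the stage-2 subset would be used to fine-tune or re-balance the classifier head.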

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/fudong03/BNS_IPDPP

Page Count
30 pages

Category
Computer Science:
Machine Learning (CS)