Aligned Contrastive Loss for Long-Tailed Recognition
By: Jiali Ma, Jiequan Cui, Maeno Kazuki, and more
Potential Business Impact:
Teaches computers to recognize rare things better.
In this paper, we propose an Aligned Contrastive Learning (ACL) algorithm to address the long-tailed recognition problem. Our findings indicate that while multi-view training boosts performance, contrastive learning does not consistently enhance model generalization as the number of views increases. Through a theoretical gradient analysis of supervised contrastive learning (SCL), we identify gradient conflicts and imbalanced attraction and repulsion gradients between positive and negative pairs as the underlying issues. Our ACL algorithm is designed to eliminate these problems and demonstrates strong performance across multiple benchmarks. We validate the effectiveness of ACL through experiments on the long-tailed CIFAR, ImageNet, Places, and iNaturalist datasets; results show that ACL achieves new state-of-the-art performance.
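For context, the supervised contrastive (SCL) loss that the gradient analysis refers to is commonly written as in the sketch below. This is a reference sketch rather than the paper's own notation: the symbols z_i (normalized embedding of anchor i), P(i) (its in-batch positives), N(i) (its negatives), A(i) = P(i) ∪ N(i), and the temperature τ are assumptions following the usual convention. Treating the embeddings as free variables, the gradient with respect to the anchor splits into an attraction term over positives and a repulsion term over negatives, which is the decomposition in which conflicts and imbalance between the two can arise.

```latex
% Standard supervised contrastive loss for anchor i (notation assumed, requires amsmath):
%   z_i = normalized embedding, P(i) = in-batch positives,
%   A(i) = all other in-batch samples, N(i) = A(i) \ P(i), \tau = temperature.
\mathcal{L}_i^{\mathrm{SCL}}
  = \frac{-1}{|P(i)|} \sum_{p \in P(i)}
    \log \frac{\exp(z_i \cdot z_p / \tau)}
              {\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}

% Gradient w.r.t. the anchor embedding, writing
% P_{ix} = \exp(z_i \cdot z_x / \tau) \big/ \sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau):
\frac{\partial \mathcal{L}_i^{\mathrm{SCL}}}{\partial z_i}
  = \frac{1}{\tau} \Bigg[
      \underbrace{\sum_{p \in P(i)} \Big(P_{ip} - \tfrac{1}{|P(i)|}\Big)\, z_p}_{\text{attraction toward positives}}
    + \underbrace{\sum_{n \in N(i)} P_{in}\, z_n}_{\text{repulsion from negatives}}
    \Bigg]
```

Under a long-tailed class distribution, the relative magnitudes of these two terms differ between head and tail classes, which is the kind of attraction/repulsion imbalance the abstract points to as motivation for ACL.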
Similar Papers
Solving the long-tailed distribution problem by exploiting the synergies and balance of different techniques
CV and Pattern Recognition
Helps computers learn rare things better.
MACL: Multi-Label Adaptive Contrastive Learning Loss for Remote Sensing Image Retrieval
CV and Pattern Recognition
Finds rare things in satellite pictures better.
Rethinking Contrastive Learning in Session-based Recommendation
Information Retrieval
Finds what you want to buy next.