Revisiting Meta-Learning with Noisy Labels: Reweighting Dynamics and Theoretical Guarantees

Published: October 14, 2025 | arXiv ID: 2510.12209v1

By: Yiming Zhang, Chester Holtz, Gal Mishne, and more

Potential Business Impact:

Helps AI systems learn accurately from datasets with mislabeled examples, reducing the need for costly manual data cleaning.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Learning with noisy labels remains challenging because over-parameterized networks memorize corrupted supervision. Meta-learning-based sample reweighting mitigates this by using a small clean subset to guide training, yet its behavior and training dynamics lack theoretical understanding. We provide a rigorous theoretical analysis of meta-reweighting under label noise and show that its training trajectory unfolds in three phases: (i) an alignment phase that amplifies examples consistent with the clean subset and suppresses conflicting ones; (ii) a filtering phase that drives noisy example weights toward zero until the clean-subset loss plateaus; and (iii) a post-filtering phase in which noise filtration becomes perturbation-sensitive. The mechanism is a similarity-weighted coupling between training and clean-subset signals, together with contraction of the clean-subset training loss; in the post-filtering regime, where the clean-subset loss is sufficiently small, the coupling term vanishes and meta-reweighting loses its discriminatory power. Guided by this analysis, we propose a lightweight surrogate for meta-reweighting that integrates mean-centering, row shifting, and label-signed modulation, yielding more stable performance while avoiding expensive bi-level optimization. Across synthetic and real noisy-label benchmarks, our method consistently outperforms strong reweighting and selection baselines.
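For concreteness, below is a minimal sketch of the bi-level meta-reweighting step the analysis concerns, in the style of learning-to-reweight (Ren et al., 2018): per-example weights emerge from how well each training example's gradient aligns with the clean-subset gradient, the similarity-weighted coupling described in the abstract. Everything here (the function name `meta_reweight_step`, the use of PyTorch's `torch.func.functional_call`, the single SGD inner step, and the clamp-and-normalize weight rule) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of one meta-reweighting step (learning-to-reweight style).
# Assumes PyTorch >= 2.0 for torch.func.functional_call. Illustrative only.
import torch
import torch.nn.functional as F
from torch.func import functional_call


def meta_reweight_step(model, train_x, train_y, clean_x, clean_y, lr=0.1):
    """Return per-example weights for a (possibly noisy) training batch,
    set by each example's influence on the clean-subset loss."""
    params = dict(model.named_parameters())

    # Per-example losses on the noisy training batch.
    logits = functional_call(model, params, (train_x,))
    losses = F.cross_entropy(logits, train_y, reduction="none")

    # Tentative zero-initialized weights; we differentiate through them.
    eps = torch.zeros_like(losses, requires_grad=True)
    weighted_loss = (eps * losses).sum()

    # Virtual SGD step on the parameters. At eps = 0 the update is zero,
    # but create_graph keeps the dependence on eps for the meta-gradient.
    grads = torch.autograd.grad(
        weighted_loss, tuple(params.values()), create_graph=True
    )
    updated = {name: p - lr * g for (name, p), g in zip(params.items(), grads)}

    # Clean-subset loss under the virtual update.
    clean_logits = functional_call(model, updated, (clean_x,))
    clean_loss = F.cross_entropy(clean_logits, clean_y)

    # d(clean_loss)/d(eps_i) = -lr * <clean gradient, example i's gradient>:
    # the gradient-similarity coupling between train and clean signals.
    eps_grad, = torch.autograd.grad(clean_loss, eps)

    # Keep only examples whose upweighting would reduce the clean loss,
    # then normalize to obtain per-example weights.
    w = torch.clamp(-eps_grad, min=0.0)
    return w / w.sum().clamp(min=1e-8)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(10, 3)  # toy classifier
    train_x, train_y = torch.randn(32, 10), torch.randint(0, 3, (32,))
    clean_x, clean_y = torch.randn(8, 10), torch.randint(0, 3, (8,))
    print(meta_reweight_step(model, train_x, train_y, clean_x, clean_y))
```

Read against the paper's three phases: early in training, `w` amplifies examples whose gradients align with the clean subset; later, weights of noisy examples are driven toward zero; and once the clean-subset loss is near zero, `eps_grad` shrinks toward zero for all examples, so the weights lose discriminatory power, which is the regime motivating the paper's cheaper surrogate.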

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)