MANTRA: a Framework for Multi-stage Adaptive Noise TReAtment During Training

Published: December 3, 2025 | arXiv ID: 2512.04319v1

By: Zixiao Zhao, Fatemeh H. Fard, Jie JW Wu

Potential Business Impact:

Cleans noisy code training data so AI models learn more reliably.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The reliable application of deep learning models to software engineering tasks hinges on high-quality training data. Yet large-scale repositories inevitably introduce noisy or mislabeled examples that degrade both accuracy and robustness. While Noise Label Learning (NLL) has been extensively studied in other fields, few works have investigated NLL in Software Engineering (SE) or in Large Language Models (LLMs) for SE tasks. In this work, we propose MANTRA, a Multi-stage Adaptive Noise TReAtment framework that embeds noise diagnosis and mitigation directly into the fine-tuning process of code Pretrained Language Models (PTMs) and code LLMs. We first investigate how varying levels of noise affect the models' convergence and loss trajectories. We then apply an adaptive dropout strategy, guided by per-sample loss dynamics and Gaussian Mixture Model clustering, to exclude persistently noisy points while preserving clean data. Applied to code summarization and commit intent classification, our experiments reveal that some LLMs are more sensitive to noise than others; with MANTRA, however, the performance of all models improves on both tasks. MANTRA enables researchers and practitioners to reduce the impact of dataset errors during training, saving time on data cleaning and preprocessing while maximizing the effect of fine-tuning.
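The abstract's core mechanism is separating clean from noisy samples by clustering per-sample training losses with a Gaussian Mixture Model: clean examples tend to settle at low loss while mislabeled ones stay high. Below is a minimal, hypothetical sketch of that idea (not the authors' implementation) using a small hand-rolled two-component 1-D GMM fit by EM; the function name `fit_gmm_1d`, the synthetic loss values, and the 0.5 posterior threshold are all illustrative assumptions.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture to loss values via EM.

    Returns each sample's posterior probability of belonging to the
    lower-mean ("clean") component.
    """
    # Initialize means at the extremes so the components start separated.
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -((x[:, None] - mu) ** 2) / (2 * var)
        )
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    clean = int(np.argmin(mu))  # lower-mean component = presumed clean
    return resp[:, clean]

# Synthetic per-sample losses: 900 low-loss (clean) and 100 high-loss
# (noisy) examples, standing in for losses logged during fine-tuning.
rng = np.random.default_rng(0)
losses = np.concatenate(
    [rng.normal(0.3, 0.1, 900), rng.normal(1.6, 0.3, 100)]
)
p_clean = fit_gmm_1d(losses)
keep = p_clean > 0.5  # samples retained for the next training stage
print(keep.sum())  # most of the 900 low-loss samples survive the filter
```

In the multi-stage setting the paper describes, a filter like this would be re-run as fine-tuning proceeds, so that persistently high-loss samples are dropped adaptively rather than by a single fixed threshold.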

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡¦ United States, Canada

Page Count
21 pages

Category
Computer Science:
Software Engineering