The Effects of Data Augmentation on Confidence Estimation for LLMs

Published: May 21, 2025 | arXiv ID: 2506.11046v1

By: Rui Wang, Renyu Zhu, Minmin Lin, and more

Potential Business Impact:

Makes AI more honest about what it knows.

Business Areas:
A/B Testing, Data and Analytics

Confidence estimation is crucial for reflecting the reliability of large language models (LLMs), particularly for the widely used closed-source models. Using data augmentation for confidence estimation is viable, but existing discussions focus on specific augmentation techniques, limiting its potential. We study the impact of different data augmentation methods on confidence estimation. Our findings indicate that data augmentation strategies can achieve better performance and mitigate the impact of overconfidence. We investigate the factors behind this and find that, as long as semantic information is preserved, greater data diversity enhances the effectiveness of augmentation. Furthermore, the impact of different augmentation strategies varies across application settings. Considering parameter transferability and usability, a random combination of augmentations is a promising choice.
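To make the general idea concrete, below is a minimal sketch (not the authors' exact pipeline) of augmentation-based confidence estimation: a prompt is perturbed with a random combination of simple surface-level augmentations, the model is queried on each variant, and confidence is taken as the agreement rate among the answers. The helpers `random_deletion`, `random_swap`, and the `query_model` callable are assumed names for illustration.

```python
import random
from collections import Counter

def random_deletion(text: str, p: float = 0.1) -> str:
    """Randomly drop words with probability p (simple surface-level augmentation)."""
    words = text.split()
    kept = [w for w in words if random.random() > p]
    return " ".join(kept) if kept else text

def random_swap(text: str, n_swaps: int = 1) -> str:
    """Swap the positions of two randomly chosen words n_swaps times."""
    words = text.split()
    for _ in range(n_swaps):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

AUGMENTATIONS = [random_deletion, random_swap]

def augment_randomly(text: str) -> str:
    """Apply a random combination of augmentations (aims to preserve semantics)."""
    chosen = random.sample(AUGMENTATIONS, k=random.randint(1, len(AUGMENTATIONS)))
    for aug in chosen:
        text = aug(text)
    return text

def consistency_confidence(question: str, query_model, n_samples: int = 8):
    """Estimate confidence as the agreement rate of answers over augmented prompts.

    `query_model` is a user-supplied callable (prompt -> answer string); any
    closed-source LLM API can be wrapped to match this signature.
    """
    answers = [query_model(augment_randomly(question)) for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n_samples
```

This black-box formulation only needs the model's textual outputs, which is why the paper frames augmentation-based confidence estimation as especially relevant for closed-source models; the specific augmentation set and combination strategy studied in the paper may differ from the hypothetical ones above.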

Country of Origin
🇨🇳 China

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)