Score: 1

Self-ensemble: Mitigating Confidence Distortion for Large Language Models

Published: June 2, 2025 | arXiv ID: 2506.01951v1

By: Zicheng Xu, Guanchu Wang, Guangyao Zheng, and more

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Makes AI better at picking the right answer on multiple-choice questions, especially when there are many options.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Although Large Language Models (LLMs) perform well in general domains, they exhibit a confidence distortion problem on multi-choice question answering (MCQA), particularly as the number of answer choices increases. Specifically, on MCQA with many choices, LLMs suffer from under-confidence in correct predictions and over-confidence in incorrect ones, leading to substantially degraded performance. To address this problem, we propose Self-ensemble. Our method splits the choices into several groups and ensembles LLM predictions across these groups to reach a final decision. The advantage of Self-ensemble is its plug-and-play nature: it can be integrated into existing LLM architectures through a designed attention mask and positional encoding, without requiring labeled datasets for parameter tuning. Experimental results on three LLMs and datasets demonstrate that Self-ensemble comprehensively addresses the confidence distortion problem of LLMs, outperforming standard inference as well as baseline methods.
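To make the grouping-and-aggregation idea concrete, here is a minimal, model-agnostic sketch: split the answer choices into small groups, let the model score each group independently, then combine the group-level confidences into one final pick. The `score_group` scorer is a hypothetical stand-in (anything that returns per-choice probabilities for a subset of options), and this is not the paper's implementation; the actual Self-ensemble realizes the grouping inside a single forward pass via a designed attention mask and positional encoding rather than separate model calls.

```python
from typing import Callable, Dict, List

def grouped_ensemble(
    question: str,
    choices: List[str],
    score_group: Callable[[str, List[str]], Dict[str, float]],  # hypothetical LLM scorer
    group_size: int = 4,
) -> str:
    """Split the choices into groups, score each group independently,
    and pick the choice with the highest within-group confidence
    (one simple way to ensemble the group-level predictions)."""
    scores: Dict[str, float] = {}
    # Partition the choice list into consecutive groups of `group_size`.
    for start in range(0, len(choices), group_size):
        group = choices[start:start + group_size]
        probs = score_group(question, group)  # e.g. softmax over this group's options
        for choice in group:
            scores[choice] = probs.get(choice, 0.0)
    # The final decision is the choice the model was most confident about
    # within its (smaller, easier) group.
    return max(scores, key=scores.get)

# Toy usage with a stand-in scorer (uniform probabilities) just to show the call shape.
dummy_scorer = lambda q, group: {c: 1.0 / len(group) for c in group}
print(grouped_ensemble("2 + 2 = ?", ["3", "4", "5", "22"], dummy_scorer, group_size=2))
```

The intuition, per the abstract, is that scoring fewer options at a time reduces both under-confidence in correct answers and over-confidence in incorrect ones; how the group-level predictions are actually fused in the paper is handled by its attention-mask and positional-encoding design, not by the naive max shown here.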

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science:
Computation and Language