Self-ensemble: Mitigating Confidence Distortion for Large Language Models
By: Zicheng Xu, Guanchu Wang, Guangyao Zheng, and others
Potential Business Impact:
Helps AI pick the correct answer more reliably when given many choices.
Although Large Language Models (LLMs) perform well across general domains, they exhibit a confidence distortion problem on multiple-choice question answering (MCQA), particularly as the number of answer choices increases. Specifically, on MCQA with many choices, LLMs suffer from under-confidence in correct predictions and over-confidence in incorrect ones, leading to substantially degraded performance. To address this problem, we propose Self-ensemble. Our method splits the choices into several groups and ensembles LLM predictions across these groups to reach a final decision. The advantage of Self-ensemble is its plug-and-play nature: it can be integrated into existing LLM architectures through a designed attention mask and positional encoding, without requiring labeled datasets for parameter tuning. Experimental results on three LLMs and multiple datasets demonstrate that Self-ensemble comprehensively addresses the confidence distortion problem, outperforming standard inference as well as baseline methods.
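The group-then-ensemble idea can be sketched as a simple tournament over choice groups. This is a conceptual illustration only: the paper's actual method performs the ensemble inside the model via a designed attention mask and positional encoding, whereas here a hypothetical `score_fn` stands in for an LLM returning one confidence per choice on a reduced multiple-choice prompt, and the toy `quality` table replaces real model outputs.

```python
def self_ensemble(choices, score_fn, group_size=4):
    """Sketch of the Self-ensemble decision rule (assumed interface):
    split `choices` into groups of `group_size`, keep each group's
    top-scoring choice, and repeat on the winners until one remains.
    Smaller groups sidestep the confidence distortion seen on
    many-choice MCQA prompts."""
    while len(choices) > 1:
        groups = [choices[i:i + group_size]
                  for i in range(0, len(choices), group_size)]
        winners = []
        for group in groups:
            scores = score_fn(group)          # one confidence per choice
            winners.append(group[scores.index(max(scores))])
        choices = winners
    return choices[0]


# Toy stand-in scorer (hypothetical): a fixed per-choice quality table
# replaces querying an LLM for each group's answer distribution.
quality = {"A": 0.2, "B": 0.9, "C": 0.1, "D": 0.4,
           "E": 0.3, "F": 0.5, "G": 0.6, "H": 0.2}
mock_score = lambda group: [quality[c] for c in group]

print(self_ensemble(list("ABCDEFGH"), mock_score))  # -> B under this toy scorer
```

With eight choices and `group_size=4`, the first round picks a winner from each half, and a second round decides between the two winners, so the model never scores more than four options at once.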
Similar Papers
Ensemble of Large Language Models for Curated Labeling and Rating of Free-text Data
Computation and Language
Helps computers understand people's written thoughts faster.
Confidence-Credibility Aware Weighted Ensembles of Small LLMs Outperform Large LLMs in Emotion Detection
Computation and Language
Makes computers understand feelings in writing better.
Labeling Free-text Data using Language Model Ensembles
Computation and Language
Helps computers understand people's written thoughts faster.