EpiCoDe: Boosting Model Performance Beyond Training with Extrapolation and Contrastive Decoding
By: Mingxu Tao, Jie Hu, Mingchuan Yang and more
Potential Business Impact:
Makes AI smarter with less training data.
The remarkable performance of large language models (LLMs) relies heavily on the availability of abundant high-quality training data. However, the high cost of acquiring annotated data often prevents models from gaining the capabilities needed to tackle downstream tasks. In this paper, we introduce a novel method, EpiCoDe, that boosts model performance in data-scarce scenarios without extra training. We first employ model extrapolation to enhance a finetuned model using its inferior version, and then adopt contrastive decoding to further reduce prediction errors by comparing the logit scores given by the extrapolated model and the vanilla finetuned model. Experiments across three tasks over four different LLMs show that EpiCoDe consistently outperforms existing methods with significant and robust improvements. We also propose a new theoretical framework to reveal the mechanism behind contrastive decoding in data-scarce scenarios, which further helps us better understand the effectiveness of EpiCoDe.
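As a rough illustration of the two stages described in the abstract, here is a minimal PyTorch-style sketch. It assumes the extrapolated model is obtained by linearly pushing the finetuned weights away from an inferior checkpoint, and that decoding contrasts the log-probabilities of the extrapolated and vanilla finetuned models. The parameter names alpha, beta, and plausibility, and the helper functions, are illustrative assumptions rather than the paper's exact formulation.

```python
# Hypothetical sketch of the two EpiCoDe stages: (1) model extrapolation and
# (2) contrastive decoding against the vanilla finetuned model.
from typing import Dict
import torch


def extrapolate_weights(
    finetuned: Dict[str, torch.Tensor],
    inferior: Dict[str, torch.Tensor],
    alpha: float = 0.5,  # assumed extrapolation strength
) -> Dict[str, torch.Tensor]:
    """Push the finetuned weights further along the (inferior -> finetuned) direction."""
    return {
        name: finetuned[name] + alpha * (finetuned[name] - inferior[name])
        for name in finetuned
    }


def contrastive_next_token_scores(
    logits_extrapolated: torch.Tensor,  # [vocab] logits from the extrapolated model
    logits_finetuned: torch.Tensor,     # [vocab] logits from the vanilla finetuned model
    beta: float = 1.0,                  # assumed contrast strength
    plausibility: float = 0.1,          # assumed cutoff relative to the best token
) -> torch.Tensor:
    """Score tokens by the contrast between the two models' log-probabilities."""
    log_p_ext = torch.log_softmax(logits_extrapolated, dim=-1)
    log_p_ft = torch.log_softmax(logits_finetuned, dim=-1)
    scores = log_p_ext - beta * log_p_ft
    # Keep only tokens the extrapolated model itself considers plausible.
    cutoff = log_p_ext.max() + torch.log(torch.tensor(plausibility))
    return scores.masked_fill(log_p_ext < cutoff, float("-inf"))


# Usage: pick the next token from the contrastive scores.
vocab_size = 32000
ext_logits, ft_logits = torch.randn(vocab_size), torch.randn(vocab_size)
next_token = contrastive_next_token_scores(ext_logits, ft_logits).argmax()
```

In this sketch the extrapolated model plays the role of the stronger "expert" and the vanilla finetuned model the weaker contrast model, with a plausibility mask preventing low-probability tokens from being promoted by the subtraction; the exact scoring rule in EpiCoDe may differ.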
Similar Papers
Extrapolation Merging: Keep Improving With Extrapolation and Merging
Computation and Language
Improves AI without more computer power or data.
Personalized LLM Decoding via Contrasting Personal Preference
Computation and Language
Makes AI understand what you like best.
Contrastive Decoding for Synthetic Data Generation in Low-Resource Language Modeling
Computation and Language
Makes AI smarter by using fake text.