Majority of the Bests: Improving Best-of-N via Bootstrapping
By: Amin Rakhsha, Kanika Madan, Tianyu Zhang, and more
Potential Business Impact:
Finds better answers by picking the most common of the best-scoring choices.
Sampling multiple outputs from a Large Language Model (LLM) and selecting the most frequent (self-consistency) or highest-scoring (Best-of-N) candidate is a popular approach to achieving higher accuracy on tasks with discrete final answers. Best-of-N (BoN) selects the output with the highest reward, and with perfect rewards it often achieves near-perfect accuracy. With imperfect rewards from reward models, however, BoN fails to reliably find the correct answer, and its performance degrades drastically. We consider the distribution of BoN's outputs and highlight that, although the correct answer does not usually have a probability close to one under imperfect rewards, it is often the most likely outcome. This suggests that the mode of this distribution can be more reliably correct than a single sample from it. Based on this idea, we propose Majority-of-the-Bests (MoB), a novel selection mechanism that estimates the output distribution of BoN via bootstrapping and selects its mode. Experimental results across five benchmarks, three base LLMs, and two reward models demonstrate consistent improvements over BoN in 25 out of 30 setups. We also provide theoretical results on the consistency of the bootstrapping. MoB serves as a simple yet strong alternative to BoN and self-consistency and, more broadly, motivates further research into more nuanced selection mechanisms.
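To make the selection mechanism concrete, here is a minimal sketch of the bootstrapped mode selection the abstract describes. It is an illustration under stated assumptions, not the paper's exact procedure: the function name majority_of_the_bests, the resample size parameter m, the number of bootstrap rounds, and the toy answers and reward scores are all hypothetical.

```python
import random
from collections import Counter

def majority_of_the_bests(answers, rewards, m=None, num_bootstrap=2000, seed=0):
    """Pick the mode of a bootstrapped Best-of-N output distribution.

    answers: final answers parsed from the N sampled LLM outputs.
    rewards: reward-model scores aligned with `answers`.
    m:       size of each bootstrap resample (defaults to N; an assumption).
    """
    rng = random.Random(seed)
    n = len(answers)
    m = m or n
    bon_draws = []
    for _ in range(num_bootstrap):
        # Resample m candidate indices with replacement (one bootstrap sample).
        idx = [rng.randrange(n) for _ in range(m)]
        # Best-of-N within the resample: keep the highest-reward candidate.
        best = max(idx, key=lambda i: rewards[i])
        bon_draws.append(answers[best])
    # The mode of the simulated BoN draws is the MoB selection.
    return Counter(bon_draws).most_common(1)[0][0]

# Toy example: an imperfect reward model overrates the lone wrong answer "41",
# so plain BoN picks "41", while the bootstrapped mode recovers "42".
answers = ["42", "42", "41", "42", "43"]
rewards = [0.80, 0.78, 0.95, 0.82, 0.60]
print(answers[max(range(len(rewards)), key=rewards.__getitem__)])  # BoN -> "41"
print(majority_of_the_bests(answers, rewards, m=2))                # MoB -> "42"
```

In this sketch, a resample of the full size N contains any given candidate with probability roughly 1 - (1 - 1/N)^N ≈ 63%, so a single overrated candidate still dominates most resamples; the smaller resample size used in the toy example dilutes its influence and lets the repeated correct answer win the mode. The appropriate resample size in practice is a design choice this sketch does not settle.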
Similar Papers
Best-of-Majority: Minimax-Optimal Strategy for Pass@$k$ Inference Scaling
Machine Learning (CS)
Helps AI pick the best answer from many tries.
Soft Best-of-n Sampling for Model Alignment
Information Theory
Makes AI answers better by picking the best ones.
Best of mini-N in-loop Sampling: A Contextual Quality Reward Model for Reliable and Efficient Best-of-N Sampling
Methodology
Helps computers know when answers are good enough.