Optimal Self-Consistency for Efficient Reasoning with Large Language Models
By: Austin Feng, Marius Alonso, Ambroise Odonnat
Potential Business Impact:
Achieves the same reasoning accuracy while querying the model far fewer times, cutting inference cost.
Self-consistency (SC) is a widely used test-time inference technique for improving performance in chain-of-thought reasoning. It generates multiple responses, or samples, from a large language model (LLM) and selects the most frequent answer, a procedure that can naturally be viewed as a majority vote or empirical mode estimation. Despite its effectiveness, SC is prohibitively expensive at scale when applied naively to datasets, and it lacks a unified theoretical treatment of sample efficiency and scaling behavior. In this paper, we provide the first comprehensive analysis of the scaling behavior of SC and its variants, drawing on mode estimation and voting theory. We derive and empirically validate power-law scaling for self-consistency across datasets, and analyze the sample efficiency of fixed-allocation and dynamic-allocation sampling schemes. Building on these insights, we introduce Blend-ASC, a novel variant of self-consistency that dynamically allocates samples to questions during inference and achieves state-of-the-art sample efficiency, using 6.8x fewer samples than vanilla SC on average and outperforming both fixed- and dynamic-allocation SC baselines. In contrast to existing variants, Blend-ASC is hyperparameter-free and fits an arbitrary sample budget, so it can easily be applied to any self-consistency application.
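The two sampling regimes the abstract contrasts can be sketched in a few lines of Python. Below, vanilla SC draws a fixed number of samples per question and returns the empirical mode, while an early-stopping variant illustrates the generic idea behind dynamic allocation (this is an illustrative scheme, not the paper's Blend-ASC algorithm; `sample_answer` is a hypothetical callable standing in for one LLM query with a chain-of-thought prompt):

```python
from collections import Counter


def self_consistency(sample_answer, n_samples):
    """Vanilla SC: draw a fixed number of answers and return the mode.

    `sample_answer` is a hypothetical stand-in for a single LLM call
    that returns the model's final answer as a string.
    """
    answers = [sample_answer() for _ in range(n_samples)]
    # Majority vote = empirical mode of the sampled answers.
    return Counter(answers).most_common(1)[0][0]


def adaptive_self_consistency(sample_answer, max_samples, lead=3):
    """A generic dynamic-allocation scheme (not the paper's Blend-ASC):
    stop sampling as soon as the top answer leads the runner-up by
    `lead` votes; otherwise fall back to the mode at the budget."""
    counts = Counter()
    for _ in range(max_samples):
        counts[sample_answer()] += 1
        ranked = counts.most_common(2)
        if len(ranked) == 1 and ranked[0][1] >= lead:
            return ranked[0][0]  # unanimous so far, confident enough
        if len(ranked) == 2 and ranked[0][1] - ranked[1][1] >= lead:
            return ranked[0][0]  # clear margin, stop early
    return counts.most_common(1)[0][0]
```

Dynamic allocation spends few samples on easy questions, where the vote converges quickly, and reserves the budget for contested ones, which is the intuition behind the sample-efficiency gains the paper quantifies.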