Reliability-Aware Adaptive Self-Consistency for Efficient Sampling in LLM Reasoning
By: Junseok Kim, Nakyeong Yang, Kyungmin Min, and more
Potential Business Impact:
Keeps AI answers just as accurate while using much less computing power.
Self-Consistency improves reasoning reliability through multi-sample aggregation, but incurs substantial inference cost. Adaptive self-consistency methods mitigate this issue by adjusting the sampling budget; however, they rely on count-based stopping rules that treat all responses equally, often leading to unnecessary sampling. We propose Reliability-Aware Adaptive Self-Consistency (ReASC), which addresses this limitation by reframing adaptive sampling from response counting to evidence sufficiency, leveraging response-level confidence for principled information aggregation. ReASC operates in two stages: a single-sample decision stage that resolves instances confidently answerable from a single response, and a reliability-aware accumulation stage that aggregates responses by jointly leveraging their frequency and confidence. Across five models and four datasets, ReASC consistently achieves the best accuracy-cost trade-off compared to existing baselines, yielding improved inference efficiency across model scales from 3B to 27B parameters. As a concrete example, ReASC reduces inference cost by up to 70% relative to self-consistency while preserving accuracy on GSM8K using Gemma-3-4B-it.
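To make the two-stage idea concrete, here is a minimal Python sketch of what a reliability-aware stopping loop of this kind could look like. The specific thresholds (`single_sample_threshold`, `evidence_threshold`), the dominance check, and the mock sampler are illustrative assumptions for this sketch, not the paper's actual stopping criterion or implementation.

```python
import random
from collections import defaultdict
from typing import Callable, Tuple


def reliability_aware_sampling(
    sample_fn: Callable[[], Tuple[str, float]],
    single_sample_threshold: float = 0.9,  # assumed: confidence needed to stop after one sample
    evidence_threshold: float = 2.0,       # assumed: accumulated confidence needed to stop
    max_samples: int = 16,                 # hard budget, as in standard self-consistency
) -> Tuple[str, int]:
    """Sketch of a two-stage, confidence-driven adaptive self-consistency loop.

    sample_fn draws one model response and returns (answer, confidence in [0, 1]).
    Returns the selected answer and the number of samples actually drawn.
    """
    # Stage 1: single-sample decision. If the first response is confident
    # enough on its own, answer immediately and skip further sampling.
    answer, conf = sample_fn()
    if conf >= single_sample_threshold:
        return answer, 1

    # Stage 2: reliability-aware accumulation. Each response contributes
    # evidence proportional to its confidence (frequency x confidence),
    # instead of the unit vote used by count-based stopping rules.
    evidence = defaultdict(float)
    evidence[answer] += conf
    n_samples = 1

    while n_samples < max_samples:
        answer, conf = sample_fn()
        evidence[answer] += conf
        n_samples += 1

        leader = max(evidence, key=evidence.get)
        runner_up = max((v for a, v in evidence.items() if a != leader), default=0.0)
        # Stop once the leading answer has accumulated enough evidence
        # and clearly dominates the runner-up (illustrative rule).
        if evidence[leader] >= evidence_threshold and evidence[leader] > 2 * runner_up:
            break

    return max(evidence, key=evidence.get), n_samples


if __name__ == "__main__":
    # Mock "model": returns the correct answer "42" about 70% of the time
    # with higher confidence, otherwise a distractor with lower confidence.
    rng = random.Random(0)

    def mock_sample() -> Tuple[str, float]:
        if rng.random() < 0.7:
            return "42", rng.uniform(0.6, 0.95)
        return "41", rng.uniform(0.2, 0.5)

    ans, used = reliability_aware_sampling(mock_sample)
    print(f"answer={ans}, samples used={used}")
```

The point of the sketch is the shift from counting votes to accumulating confidence-weighted evidence: easy instances exit after one confident sample, while harder ones keep sampling only until the evidence for one answer is sufficient, rather than until a fixed vote count is reached.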
Similar Papers
Optimal Self-Consistency for Efficient Reasoning with Large Language Models
Machine Learning (CS)
Makes AI smarter with fewer guesses.
Reevaluating Self-Consistency Scaling in Multi-Agent Systems
Artificial Intelligence
Makes AI smarter, but not much more.
Confidence Improves Self-Consistency in LLMs
Computation and Language
Helps AI think better, faster, and more reliably.