Reliability-Aware Adaptive Self-Consistency for Efficient Sampling in LLM Reasoning

Published: January 6, 2026 | arXiv ID: 2601.02970v1

By: Junseok Kim, Nakyeong Yang, Kyungmin Min, and more

Potential Business Impact:

Keeps AI reasoning just as accurate while using much less computing power.

Business Areas:
A/B Testing, Data and Analytics

Self-Consistency improves reasoning reliability through multi-sample aggregation, but incurs substantial inference cost. Adaptive self-consistency methods mitigate this issue by adjusting the sampling budget; however, they rely on count-based stopping rules that treat all responses equally, often leading to unnecessary sampling. We propose Reliability-Aware Adaptive Self-Consistency (ReASC), which addresses this limitation by reframing adaptive sampling from response counting to evidence sufficiency, leveraging response-level confidence for principled information aggregation. ReASC operates in two stages: a single-sample decision stage that resolves instances confidently answerable from a single response, and a reliability-aware accumulation stage that aggregates responses by jointly leveraging their frequency and confidence. Across five models and four datasets, ReASC consistently achieves the best accuracy-cost trade-off compared to existing baselines, yielding improved inference efficiency across model scales from 3B to 27B parameters. As a concrete example, ReASC reduces inference cost by up to 70% relative to self-consistency while preserving accuracy on GSM8K using Gemma-3-4B-it.
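The two-stage procedure described in the abstract can be summarized in code. Below is a minimal Python sketch, assuming a `sample_response()` callable that returns an (answer, confidence) pair from one LLM sample; the threshold values, the absolute evidence-mass stopping rule, and all names here are illustrative assumptions, not the authors' exact formulation.

```python
from collections import defaultdict
from typing import Callable, Tuple


def reasc_sketch(
    sample_response: Callable[[], Tuple[str, float]],  # returns (answer, confidence in [0, 1])
    single_sample_threshold: float = 0.9,  # hypothetical confidence cutoff for stage 1
    evidence_threshold: float = 2.5,       # hypothetical evidence mass required to stop
    max_samples: int = 16,                 # sampling budget ceiling, as in self-consistency
) -> str:
    """Two-stage adaptive sampling: stop once one answer has accumulated
    enough confidence-weighted evidence, instead of counting votes alone."""
    # Stage 1: single-sample decision. If the first response is confident
    # enough on its own, answer immediately and spend no further samples.
    answer, conf = sample_response()
    if conf >= single_sample_threshold:
        return answer

    # Stage 2: reliability-aware accumulation. Each response contributes
    # evidence equal to its confidence, so frequency and confidence are
    # combined rather than treating all votes equally.
    evidence = defaultdict(float)
    evidence[answer] += conf
    for _ in range(1, max_samples):
        answer, conf = sample_response()
        evidence[answer] += conf
        best_answer, best_mass = max(evidence.items(), key=lambda kv: kv[1])
        # Stop as soon as the leading answer has sufficient evidence.
        if best_mass >= evidence_threshold:
            return best_answer
    # Budget exhausted: fall back to the answer with the most evidence.
    return max(evidence.items(), key=lambda kv: kv[1])[0]
```

In this sketch, low-confidence responses contribute little evidence, so disagreement among unreliable samples keeps the loop running, while a few high-confidence, agreeing samples stop it early; this confidence-weighted stopping is the mechanism the abstract credits for cutting sampling cost without sacrificing accuracy.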

Country of Origin
🇰🇷 Korea, Republic of

Page Count
15 pages

Category
Computer Science:
Computation and Language