Latent Self-Consistency for Reliable Majority-Set Selection in Short- and Long-Answer Reasoning

Published: August 25, 2025 | arXiv ID: 2508.18395v1

By: Jeong-seok Oh, Jay-yoon Lee

Potential Business Impact:

Makes AI answers more reliable and trustworthy.

Business Areas:
Semantic Search, Internet Services

Probabilistic decoding in Large Language Models (LLMs) often yields inconsistent outputs, particularly on complex or long-form questions. Self-Consistency (SC) mitigates this for short-form QA by majority voting over exact answer strings, while Universal Self-Consistency (USC) and the Weighted Unigram Consistency Score (WUCS) extend consistency selection to long-form responses but lose accuracy on short-form benchmarks. We introduce Latent Self-Consistency (LSC), which selects the most semantically consistent response using learnable token embeddings. A lightweight forward generation of summary tokens increases inference time by less than 1% and requires no changes to the model architecture. Across 6 short-form and 5 long-form reasoning benchmarks (e.g., MATH, MMLU, TruthfulQA), LSC surpasses SC, USC, and WUCS on average on both short-form and long-form tasks while maintaining negligible computational overhead. These results position LSC as a practical consistency-selection method that works reliably across answer formats. Additionally, LSC provides well-calibrated confidence estimates, maintaining low Expected Calibration Error across both answer formats.
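To make the contrast concrete, here is a minimal sketch of the two selection strategies the abstract mentions: exact-string majority voting (SC) and consistency selection in an embedding space. The `embed` function is a hypothetical stand-in; LSC itself derives response representations from learnable summary tokens inside the model, which this sketch only approximates with an external embedding function.

```python
import numpy as np
from collections import Counter

def self_consistency(answers):
    """Classic SC: majority vote over exact answer strings (short-form QA)."""
    return Counter(answers).most_common(1)[0][0]

def latent_consistency_select(responses, embed):
    """Sketch of semantic consistency selection (LSC-style, approximated).

    `embed` maps a response string to a fixed-size vector (assumed, not part
    of the paper's API). Returns the response whose embedding agrees most,
    on average, with the other sampled responses.
    """
    vecs = np.stack([embed(r) for r in responses])
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # normalize for cosine
    sims = vecs @ vecs.T                 # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)          # ignore self-similarity
    mean_sim = sims.mean(axis=1)         # average agreement per response
    return responses[int(np.argmax(mean_sim))]
```

The embedding-space variant applies equally to long-form responses, where exact-string voting breaks down because sampled answers rarely match verbatim.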

Country of Origin
🇰🇷 Korea, Republic of

Page Count
25 pages

Category
Computer Science:
Computation and Language