Benchmarking Overton Pluralism in LLMs
By: Elinor Poole-Dayan, Jiayi Wu, Taylor Sorensen, and more
Potential Business Impact:
Helps AI present a wider range of viewpoints in its answers.
We introduce a novel framework for measuring Overton pluralism in LLMs: the extent to which diverse viewpoints are represented in model outputs. We (i) formalize Overton pluralism as a set coverage metric (OvertonScore), (ii) conduct a large-scale U.S.-representative human study (N = 1209; 60 questions; 8 LLMs), and (iii) develop an automated benchmark that closely reproduces human judgments. On average, models achieve OvertonScores of 0.35–0.41, with DeepSeek V3 performing best; yet all models remain far below the theoretical maximum of 1.0, revealing substantial headroom for improvement. Because repeated large-scale human studies are costly and slow, scalable evaluation tools are essential for model development. Hence, we propose an automated benchmark that achieves high rank correlation with human judgments ($\rho = 0.88$), providing a practical proxy without replacing human assessment. By turning pluralistic alignment from a normative aim into a measurable benchmark, our work establishes a foundation for systematic progress toward more pluralistic LLMs.
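As a rough illustration (not the paper's reference implementation), the sketch below computes a set-coverage-style score for one model response against a reference set of human-elicited viewpoints, and checks how well an automated scorer's model ranking tracks a human-derived ranking via Spearman correlation. All function names, scores, and data are hypothetical assumptions.

```python
# Hypothetical sketch of an Overton-style set-coverage score and a
# rank-correlation check against human judgments. Names and numbers are
# illustrative only, not the paper's actual OvertonScore or results.
from scipy.stats import spearmanr


def overton_score(covered_viewpoints: set[str], reference_viewpoints: set[str]) -> float:
    """Fraction of reference viewpoints that the model response covers."""
    if not reference_viewpoints:
        return 0.0
    return len(covered_viewpoints & reference_viewpoints) / len(reference_viewpoints)


# Toy example: viewpoints elicited from humans for one question, and the
# subset judged to be represented in one model's answer.
reference = {"v1", "v2", "v3", "v4", "v5"}
covered = {"v1", "v3"}
print(overton_score(covered, reference))  # 0.4

# Toy check that an automated benchmark ranks models similarly to humans
# (the abstract reports a rank correlation of rho = 0.88 for its benchmark).
human_scores = [0.41, 0.38, 0.36, 0.35]  # hypothetical per-model human averages
auto_scores = [0.43, 0.37, 0.36, 0.33]   # hypothetical automated-benchmark averages
rho, _ = spearmanr(human_scores, auto_scores)
print(f"Spearman rho = {rho:.2f}")
```

The design choice mirrored here is that a coverage-style metric is bounded by 1.0, which is what makes the reported 0.35–0.41 range interpretable as substantial headroom.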
Similar Papers
Operationalizing Pluralistic Values in Large Language Model Alignment Reveals Trade-offs in Safety, Inclusivity, and Model Behavior
Artificial Intelligence
Makes AI understand different people better.
VAL-Bench: Measuring Value Alignment in Language Models
Artificial Intelligence
Checks whether AI holds fair and consistent values.
Evaluating AI Alignment in Eleven LLMs through Output-Based Analysis and Human Benchmarking
Artificial Intelligence
Shows what AI values most in its answers.