Aligned Textual Scoring Rules
By: Yuxuan Lu, Yifan Wu, Jason Hartline, and more
Potential Business Impact:
Makes AI understand what people like in writing.
Scoring rules elicit probabilistic predictions from a strategic agent by scoring the prediction against a ground truth state. A scoring rule is proper if, from the agent's perspective, reporting the true belief maximizes the expected score. With the development of language models, Wu and Hartline (2024) propose a reduction from textual information elicitation to the numerical (i.e., probabilistic) information elicitation problem, which achieves provable properness for textual elicitation. However, not all proper scoring rules are well aligned with human preferences over text. Our paper designs the Aligned Scoring Rule (ASR) for text by optimizing over proper scoring rules to minimize the mean squared error against a reference score (e.g., a human score). Our experiments show that ASR outperforms previous methods in aligning with human preference while maintaining properness.
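To illustrate the general idea of fitting a proper scoring rule to a reference score, here is a minimal sketch, not the paper's ASR construction. It assumes binary outcomes, a hypothetical family built from mixtures and positive-affine transforms of the Brier and log rules (operations that preserve properness), and synthetic stand-in "human" scores; the paper's method operates on text via the reduction of Wu and Hartline (2024).

```python
import numpy as np

# Two classic proper scoring rules for a binary prediction p of outcome y in {0, 1}.
def brier_score(p, y):
    # Quadratic (Brier) rule: proper, bounded in [0, 1].
    return 1.0 - (y - p) ** 2

def log_score(p, y, eps=1e-9):
    # Logarithmic rule: proper, unbounded below; clipped for numerical safety.
    p = np.clip(p, eps, 1 - eps)
    return y * np.log(p) + (1 - y) * np.log(1 - p)

# Convex combinations and positive-affine transforms of proper rules remain proper,
# so every choice of (w in [0, 1], a > 0, b) below yields a proper scoring rule.

# Toy data: predictions, realized outcomes, and hypothetical reference (e.g. human) scores.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=200)
y = rng.binomial(1, p)
reference = 0.7 * brier_score(p, y) + rng.normal(0, 0.05, size=200)  # stand-in, not real human data

# Fit (w, a, b) by grid search over w plus least squares for (a, b),
# minimizing mean squared error against the reference score.
best = None
for w in np.linspace(0, 1, 51):
    base = w * brier_score(p, y) + (1 - w) * log_score(p, y)
    A = np.column_stack([base, np.ones_like(base)])
    (a, b), *_ = np.linalg.lstsq(A, reference, rcond=None)
    if a <= 0:  # properness of the fitted rule requires a positive scale
        continue
    mse = np.mean((A @ np.array([a, b]) - reference) ** 2)
    if best is None or mse < best[0]:
        best = (mse, w, a, b)

mse, w, a, b = best
print(f"best mixture weight w={w:.2f}, scale a={a:.3f}, shift b={b:.3f}, MSE={mse:.4f}")
```

The restriction to a parametrized family of proper rules is what keeps the fitted score proper while letting the MSE objective pull it toward the reference; the actual ASR optimizes over a richer space and scores text rather than binary predictions.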
Similar Papers
Proper scoring rules for estimation and forecast evaluation
Statistics Theory
Helps computers guess better and learn more.
A Novel Framework for Uncertainty Quantification via Proper Scores for Classification and Beyond
Machine Learning (CS)
Makes AI more sure about its answers.
Truthful Elicitation of Imprecise Forecasts
Machine Learning (CS)
Helps experts share uncertain guesses better.