Beyond Correctness: Evaluating Subjective Writing Preferences Across Cultures
By: Shuangshuang Ying, Yunwen Li, Xingwei Qu, and more
Potential Business Impact:
Helps computers judge writing quality better.
Current preference learning methods achieve high accuracy on standard benchmarks but exhibit significant performance degradation when objective quality signals are removed. We introduce WritingPreferenceBench, a dataset of 1,800 human-annotated preference pairs (1,200 English, 600 Chinese) across 8 creative writing genres, where responses are matched for objective correctness, factual accuracy, and length. On this benchmark, sequence-based reward models, the standard architecture for RLHF, achieve only 52.7% mean accuracy, while zero-shot language model judges perform at 53.9%. In contrast, generative reward models that produce explicit reasoning chains achieve 81.8% accuracy. We observe high within-model variance across genres: individual models range from 18.2% to 81.8% accuracy across different writing categories, with standard deviations averaging 10.1%. This variance persists regardless of model scale, with 27B-parameter models showing no consistent improvement over 8B variants. Our results suggest that current RLHF methods primarily learn to detect objective errors rather than capture subjective quality preferences (e.g., creativity, stylistic flair, and emotional resonance), and that successful preference modeling may require intermediate reasoning representations rather than direct classification.
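The accuracy figures above come from pairwise preference evaluation: a reward model scores both responses in each human-annotated pair, and a pair counts as correct when the human-preferred response scores higher. A minimal sketch of that metric, with a toy length-based scorer standing in for a real reward model (the function names and data here are illustrative, not from the WritingPreferenceBench release):

```python
# Sketch of pairwise preference accuracy, the metric behind the
# 52.7% / 53.9% / 81.8% numbers reported in the abstract.
# `preference_accuracy` and the toy data are hypothetical examples.

def preference_accuracy(pairs, score_fn):
    """pairs: list of (chosen, rejected) response texts, where `chosen`
    is the human-preferred response; score_fn maps text -> float."""
    correct = sum(
        score_fn(chosen) > score_fn(rejected)
        for chosen, rejected in pairs
    )
    return correct / len(pairs)

# Toy scorer: response length (a stand-in for a learned reward model).
toy_pairs = [
    ("a vivid tale", "ok"),                 # chosen is longer -> scored correct
    ("story", "a long dull account"),       # chosen is shorter -> scored wrong
]
acc = preference_accuracy(toy_pairs, score_fn=len)
# acc = 0.5
```

A sequence-based reward model plugs in as `score_fn` by mapping a response to a scalar; a generative judge instead emits a reasoning chain and a verdict, which is then reduced to the same pairwise comparison.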
Similar Papers
LiteraryTaste: A Preference Dataset for Creative Writing Personalization
Computation and Language
Teaches computers to write stories people like.
RLMR: Reinforcement Learning with Mixed Rewards for Creative Writing
Artificial Intelligence
Helps AI write stories that are good and follow rules.