Persona-Augmented Benchmarking: Evaluating LLMs Across Diverse Writing Styles
By: Kimberly Le Truong, Riccardo Fogliato, Hoda Heidari and others
Potential Business Impact:
Shows how AI test scores shift with different writing styles, so AI systems can be evaluated more reliably.
Current benchmarks for evaluating Large Language Models (LLMs) often lack diversity in writing style, with many adhering primarily to standardized conventions. Such benchmarks do not fully capture the rich variety of communication patterns found in human writing. LLMs optimized against these benchmarks may therefore perform brittlely when faced with "non-standard" input. In this work, we test this hypothesis by rewriting evaluation prompts using persona-based LLM prompting, a low-cost method for emulating diverse writing styles. Our results show that, even with identical semantic content, variations in writing style and prompt formatting significantly affect the estimated performance of the LLM under evaluation. Notably, we identify distinct writing styles that consistently trigger either low or high performance across a range of models and tasks, irrespective of model family, size, and recency. Our work offers a scalable approach to augmenting existing benchmarks, improving the external validity of the assessments they provide for measuring LLM performance across linguistic variations.
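To make the persona-based rewriting idea concrete, here is a minimal sketch of how evaluation prompts might be rewritten into different writing styles before benchmarking. The persona list, the REWRITE_TEMPLATE wording, and the generic `llm` callable are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of persona-based prompt rewriting for benchmark augmentation.
# Personas, template wording, and the `llm` wrapper are assumptions for
# illustration only.

from typing import Callable, Dict, List

# Hypothetical personas meant to emulate distinct writing styles.
PERSONAS: List[str] = [
    "a teenager who writes in casual, abbreviated text-message style",
    "a non-native English speaker who uses simple, direct sentences",
    "a retired lawyer who writes in long, formal, clause-heavy prose",
]

REWRITE_TEMPLATE = (
    "Rewrite the following task prompt in the voice of {persona}. "
    "Preserve the exact meaning and all constraints; change only the style.\n\n"
    "Original prompt:\n{prompt}\n\nRewritten prompt:"
)


def rewrite_with_personas(
    prompt: str,
    llm: Callable[[str], str],
) -> Dict[str, str]:
    """Return one style-rewritten variant of `prompt` per persona.

    `llm` is any function mapping an instruction string to generated text,
    e.g. a thin wrapper around a chat-completion API.
    """
    return {
        persona: llm(REWRITE_TEMPLATE.format(persona=persona, prompt=prompt))
        for persona in PERSONAS
    }


if __name__ == "__main__":
    # Stand-in "LLM" so the sketch runs without API credentials: it simply
    # echoes the last line of the instruction. Swap in a real model call to
    # produce actual stylistic rewrites.
    echo_llm = lambda instruction: instruction.splitlines()[-1]

    variants = rewrite_with_personas(
        "Summarize the following article in three sentences.", echo_llm
    )
    for persona, variant in variants.items():
        print(f"[{persona[:30]}...] {variant}")
```

Each rewritten variant keeps the original task's semantics, so any change in benchmark score can be attributed to writing style rather than task content.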
Similar Papers
Counterfactual LLM-based Framework for Measuring Rhetorical Style
Computation and Language
AI helps tell hype from real science news.
Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays
Computation and Language
AI can't write like real people applying to college.
WritingBench: A Comprehensive Benchmark for Generative Writing
Artificial Intelligence
Tests how well computers write many different kinds of text.