EssayBench: Evaluating Large Language Models in Multi-Genre Chinese Essay Writing
By: Fan Gao, Dongyuan Li, Ding Xia, and more
Potential Business Impact:
Tests how well computers write Chinese essays.
Chinese essay writing and its evaluation are critical in educational contexts, yet the capabilities of Large Language Models (LLMs) in this domain remain largely underexplored. Existing benchmarks often rely on coarse-grained text quality metrics, largely overlooking the structural and rhetorical complexities of Chinese essays, particularly across diverse genres. To address this gap, we propose EssayBench, a multi-genre benchmark specifically designed for Chinese essay writing across four major genres: Argumentative, Narrative, Descriptive, and Expository. We curate and refine a total of 728 real-world prompts to ensure authenticity and meticulously categorize them into the Open-Ended and Constrained sets to capture diverse writing scenarios. To reliably evaluate generated essays, we develop a fine-grained, genre-specific scoring framework that hierarchically aggregates scores. We further validate our evaluation protocol through a comprehensive human agreement study. Finally, we benchmark 15 large-sized LLMs, analyzing their strengths and limitations across genres and instruction types. With EssayBench, we aim to advance LLM-based Chinese essay evaluation and inspire future research on improving essay generation in educational settings.
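To make the idea of hierarchical score aggregation concrete, here is a minimal Python sketch of how genre-specific, fine-grained criteria might roll up into an essay-level score. The rubric names, weights, and two-level hierarchy below are illustrative assumptions, not the exact criteria or aggregation used by EssayBench.

    from typing import Dict

    # Hypothetical rubric: each genre maps to dimensions, and each dimension
    # to weighted fine-grained criteria scored on a 1-5 scale.
    RUBRICS: Dict[str, Dict[str, Dict[str, float]]] = {
        "argumentative": {
            "structure": {"thesis_clarity": 0.5, "paragraph_coherence": 0.5},
            "rhetoric": {"evidence_use": 0.6, "counterargument": 0.4},
        },
        "narrative": {
            "structure": {"plot_arc": 0.6, "pacing": 0.4},
            "rhetoric": {"imagery": 0.5, "voice": 0.5},
        },
    }

    def aggregate_score(genre: str, criterion_scores: Dict[str, float]) -> float:
        """Aggregate bottom-up: criteria -> dimension (weighted mean)
        -> essay (mean of dimension scores)."""
        dimension_scores = []
        for dimension, criteria in RUBRICS[genre].items():
            weighted = sum(criterion_scores[name] * w for name, w in criteria.items())
            dimension_scores.append(weighted / sum(criteria.values()))
        return sum(dimension_scores) / len(dimension_scores)

    # Example usage with hypothetical criterion scores from a judge:
    scores = {
        "thesis_clarity": 4.0, "paragraph_coherence": 3.5,
        "evidence_use": 4.5, "counterargument": 3.0,
    }
    print(aggregate_score("argumentative", scores))  # mean of 3.75 and 3.9 -> 3.825

The design choice illustrated here is that criteria are weighted only within their own dimension, so strengths in rhetoric cannot mask structural weaknesses; the actual weighting scheme in the paper may differ.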
Similar Papers
WritingBench: A Comprehensive Benchmark for Generative Writing
Artificial Intelligence
Tests how well computers write different kinds of stories.
OmniEduBench: A Comprehensive Chinese Benchmark for Evaluating Large Language Models in Education
Computation and Language
Tests how well AI learns and thinks like students.
Capabilities and Evaluation Biases of Large Language Models in Classical Chinese Poetry Generation: A Case Study on Tang Poetry
Computation and Language
Computers write poems, but humans must check them.