Score: 1

EssayBench: Evaluating Large Language Models in Multi-Genre Chinese Essay Writing

Published: June 3, 2025 | arXiv ID: 2506.02596v1

By: Fan Gao, Dongyuan Li, Ding Xia, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Benchmarks how well LLMs write Chinese essays across multiple genres, which is relevant to educational and writing-assistance applications.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Chinese essay writing and its evaluation are critical in educational contexts, yet the capabilities of Large Language Models (LLMs) in this domain remain largely underexplored. Existing benchmarks often rely on coarse-grained text quality metrics, largely overlooking the structural and rhetorical complexities of Chinese essays, particularly across diverse genres. To address this gap, we propose EssayBench, a multi-genre benchmark specifically designed for Chinese essay writing across four major genres: Argumentative, Narrative, Descriptive, and Expository. We curate and refine a total of 728 real-world prompts to ensure authenticity and meticulously categorize them into the Open-Ended and Constrained sets to capture diverse writing scenarios. To reliably evaluate generated essays, we develop a fine-grained, genre-specific scoring framework that hierarchically aggregates scores. We further validate our evaluation protocol through a comprehensive human agreement study. Finally, we benchmark 15 large-sized LLMs, analyzing their strengths and limitations across genres and instruction types. With EssayBench, we aim to advance LLM-based Chinese essay evaluation and inspire future research on improving essay generation in educational settings.
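The abstract describes a fine-grained, genre-specific scoring framework that hierarchically aggregates criterion scores into an overall essay score, but the digest does not reproduce the actual rubric. Below is a minimal illustrative sketch of such two-level aggregation; the criterion names, dimension weights, and 1-5 scale are assumptions for illustration, not the paper's rubric.

```python
# Illustrative sketch of hierarchical score aggregation for genre-specific
# essay evaluation. Criterion names, weights, and the two-level structure
# are hypothetical, not taken from EssayBench.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    weight: float  # relative weight within its parent dimension


# Hypothetical rubric for the Argumentative genre: top-level dimensions,
# each aggregating several fine-grained criteria.
ARGUMENTATIVE_RUBRIC = {
    "structure": [Criterion("thesis_clarity", 0.4), Criterion("paragraph_logic", 0.6)],
    "rhetoric": [Criterion("evidence_use", 0.5), Criterion("persuasiveness", 0.5)],
    "language": [Criterion("fluency", 0.5), Criterion("diction", 0.5)],
}
DIMENSION_WEIGHTS = {"structure": 0.4, "rhetoric": 0.35, "language": 0.25}


def aggregate(scores: dict) -> float:
    """Hierarchically aggregate per-criterion scores (e.g., 1-5 ratings from
    a grader) into one essay score: criteria -> dimension -> overall."""
    overall = 0.0
    for dim, criteria in ARGUMENTATIVE_RUBRIC.items():
        dim_score = sum(c.weight * scores[dim][c.name] for c in criteria)
        overall += DIMENSION_WEIGHTS[dim] * dim_score
    return overall


# Example: criterion-level scores produced for one generated essay.
example = {
    "structure": {"thesis_clarity": 4, "paragraph_logic": 5},
    "rhetoric": {"evidence_use": 3, "persuasiveness": 4},
    "language": {"fluency": 5, "diction": 4},
}
print(round(aggregate(example), 2))  # weighted two-level aggregate, e.g. 4.19
```

A two-level weighted sum like this is only one plausible way to "hierarchically aggregate" scores; the paper's framework may use different dimensions per genre and a different combination rule.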

Country of Origin
🇨🇳 China

Page Count
23 pages

Category
Computer Science:
Computation and Language