FEANEL: A Benchmark for Fine-Grained Error Analysis in K-12 English Writing
By: Jingheng Ye, Shen Wang, Jiaqi Chen, and more
Potential Business Impact:
Helps AI grade student writing more accurately.
Large Language Models (LLMs) have transformed artificial intelligence, offering profound opportunities for educational applications. However, their ability to provide fine-grained educational feedback for K-12 English writing remains underexplored. In this paper, we challenge the error-analysis and pedagogical skills of LLMs by introducing the task of fine-grained error analysis for English learners and presenting the Fine-grained Error ANalysis for English Learners (FEANEL) Benchmark. The benchmark comprises 1,000 essays written by elementary and secondary school students, together with a well-developed English writing error taxonomy. Each error is annotated by language education experts with its type, severity, and explanatory feedback, following a part-of-speech-based taxonomy the experts co-developed. We evaluate state-of-the-art LLMs on the FEANEL Benchmark to probe their error-analysis and pedagogical abilities. Experimental results reveal significant gaps in current LLMs' ability to perform fine-grained error analysis, highlighting the need for methodological advances tailored to educational applications.
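To make the annotation scheme concrete, here is a minimal sketch of how a single annotated error record might be represented. The field names, category labels, and example values are illustrative assumptions, not the benchmark's actual schema.

```python
from dataclasses import dataclass

# Hypothetical record for one annotated error; field names and example
# values are assumptions for illustration, not FEANEL's published format.
@dataclass
class ErrorAnnotation:
    essay_id: str          # which student essay the error comes from
    span: tuple[int, int]  # character offsets of the erroneous text
    error_type: str        # category from the part-of-speech-based taxonomy
    severity: str          # expert-rated severity, e.g. "minor" or "major"
    feedback: str          # explanatory feedback written for the learner

example = ErrorAnnotation(
    essay_id="essay_0042",
    span=(118, 124),
    error_type="verb tense",
    severity="minor",
    feedback="Use the past tense 'went' because the essay describes last weekend.",
)
print(example.error_type, example.severity)
```

A structure along these lines would let an evaluation script compare an LLM's predicted type, severity, and feedback against the expert annotations error by error.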
Similar Papers
FLAWS: A Benchmark for Error Identification and Localization in Scientific Papers
Computation and Language
Helps computers find mistakes in science papers.
Annotating Errors in English Learners' Written Language Production: Advancing Automated Written Feedback Systems
Computation and Language
Helps students learn why they make writing mistakes.
FActBench: A Benchmark for Fine-grained Automatic Evaluation of LLM-Generated Text in the Medical Domain
Computation and Language
Checks if AI gives correct medical advice.