Annotating Errors in English Learners' Written Language Production: Advancing Automated Written Feedback Systems
By: Steven Coyne, Diana Galvan-Sosa, Ryan Spring, and others
Potential Business Impact:
Helps students learn why they make writing mistakes.
Recent advances in natural language processing (NLP) have contributed to the development of automated writing evaluation (AWE) systems that can correct grammatical errors. However, while these systems are effective at improving text, they are not optimally designed for language learning. They favor direct revisions, often with click-to-fix functionality that can be applied without considering the reason for the correction. Meanwhile, depending on the error type, learners may benefit most from simple explanations and strategically indirect hints, especially for generalizable grammatical rules. To support the generation of such feedback, we introduce an annotation framework that models each error's type and generalizability. For error type classification, we introduce a typology focused on inferring learners' knowledge gaps by connecting their errors to specific grammatical patterns. Following this framework, we collect a dataset of annotated learner errors and corresponding human-written feedback comments, each labeled as a direct correction or a hint. With this data, we evaluate keyword-guided, keyword-free, and template-guided methods of generating feedback using large language models (LLMs). Human teachers examined each system's outputs, assessing them on criteria including relevance, factuality, and comprehensibility. We report on the development of the dataset and the comparative performance of the systems investigated.
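The annotation framework described above pairs each learner error with an error type, a generalizability judgment, and feedback labeled as a direct correction or a hint. A minimal sketch of what such an annotation record might look like is below; all names (`AnnotatedError`, `FeedbackStyle`, `choose_style`) and the selection heuristic are illustrative assumptions, not the paper's actual schema or labels.

```python
from dataclasses import dataclass
from enum import Enum


class FeedbackStyle(Enum):
    """Feedback labels from the abstract: direct correction vs. hint."""
    DIRECT_CORRECTION = "direct"
    HINT = "hint"


@dataclass
class AnnotatedError:
    """Hypothetical record for one annotated learner error."""
    source: str          # learner's original sentence
    correction: str      # corrected sentence
    error_type: str      # label tying the error to a grammatical pattern
    generalizable: bool  # does the error reflect a reusable grammatical rule?
    feedback: str        # human-written feedback comment
    style: FeedbackStyle


def choose_style(generalizable: bool) -> FeedbackStyle:
    # Assumed heuristic based on the abstract's motivation: errors rooted in
    # generalizable rules benefit from indirect hints, while idiosyncratic
    # errors (e.g., fixed collocations) are better served by direct corrections.
    return FeedbackStyle.HINT if generalizable else FeedbackStyle.DIRECT_CORRECTION


example = AnnotatedError(
    source="She go to school every day.",
    correction="She goes to school every day.",
    error_type="subject-verb agreement",  # hypothetical tag
    generalizable=True,
    feedback="Check the verb form: what ending does a third-person "
             "singular subject require in the present tense?",
    style=choose_style(True),
)
```

In this sketch, the generalizability flag drives the feedback style, reflecting the abstract's claim that hints are most useful when the underlying rule transfers to other sentences.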
Similar Papers
A Taxonomy of Errors in English as she is spoke: Toward an AI-Based Method of Error Analysis for EFL Writing Instruction
Computation and Language
AI finds and fixes writing mistakes in English.
FEANEL: A Benchmark for Fine-Grained Error Analysis in K-12 English Writing
Computation and Language
Helps AI grade student writing more accurately.
Humanizing Automated Programming Feedback: Fine-Tuning Generative Models with Student-Written Feedback
Computers and Society
Teaches computers to give better coding help.