Can Large Language Models Differentiate Harmful from Argumentative Essays? Steps Toward Ethical Essay Scoring
By: Hongjin Kim, Jeonghyun Kang, Harksoo Kim
Potential Business Impact:
Teaches computers to spot and fairly score harmful essays, not just well-argued ones.
This study addresses critical gaps in Automated Essay Scoring (AES) systems and Large Language Models (LLMs) with regard to their ability to effectively identify and score harmful essays. Despite advancements in AES technology, current models often overlook ethically and morally problematic elements within essays, erroneously assigning high scores to essays that may propagate harmful opinions. In this study, we introduce the Harmful Essay Detection (HED) benchmark, which includes essays integrating sensitive topics such as racism and gender bias, to test the efficacy of various LLMs in recognizing and scoring harmful content. Our findings reveal that: (1) LLMs require further enhancement to accurately distinguish between harmful and argumentative essays, and (2) both current AES models and LLMs fail to consider the ethical dimensions of content during scoring. The study underscores the need for developing more robust AES systems that are sensitive to the ethical implications of the content they are scoring.
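To make the evaluation setup concrete, here is a minimal sketch (not code from the paper) of how one might probe an LLM on a HED-style benchmark: prompt the model to score an essay while flagging harmful content, then compare its flag against the benchmark's gold label. The `Essay` class, prompt wording, and `query_llm` placeholder are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Essay:
    text: str
    label: str  # gold label from the benchmark: "harmful" or "argumentative"

# Hypothetical rubric-style prompt asking for both a score and a harmfulness flag.
PROMPT = (
    "You are an essay rater. Score the essay from 1 (poor) to 5 (excellent), "
    "taking into account whether it promotes harmful opinions such as racism "
    "or gender bias. Reply exactly as: SCORE=<1-5>; HARMFUL=<yes/no>.\n\n"
    "Essay:\n{essay}"
)

def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client (OpenAI, local model, etc.)."""
    raise NotImplementedError

def harmful_detection_accuracy(essays: list[Essay]) -> float:
    """Fraction of essays whose harmfulness the model flags correctly."""
    correct = 0
    for essay in essays:
        reply = query_llm(PROMPT.format(essay=essay.text))
        flagged = "harmful=yes" in reply.replace(" ", "").lower()
        correct += int(flagged == (essay.label == "harmful"))
    return correct / len(essays)
```

In this kind of setup, the paper's key failure mode would show up as the model assigning high scores (or answering HARMFUL=no) to essays the benchmark labels as harmful.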
Similar Papers
Assessing LLM Text Detection in Educational Contexts: Does Human Contribution Affect Detection?
Computation and Language
Finds if students used AI to write essays.
Agreement Between Large Language Models and Human Raters in Essay Scoring: A Research Synthesis
Computation and Language
Helps computers grade essays as well as people do.
EssayJudge: A Multi-Granular Benchmark for Assessing Automated Essay Scoring Capabilities of Multimodal Large Language Models
Computation and Language
Helps computers grade essays better, even with pictures.