Enhancing Security and Strengthening Defenses in Automated Short-Answer Grading Systems
By: Sahar Yarmohammadtoosky, Yiyun Zhou, Victoria Yaneva, and more
Potential Business Impact:
Hardens AI that grades student answers against being gamed.
This study examines vulnerabilities in transformer-based automated short-answer grading systems used in medical education, focusing on how these systems can be manipulated through adversarial gaming strategies. Our research identifies three main types of gaming strategies that exploit the systems' weaknesses, potentially leading to inflated scores (false positives). To counteract these vulnerabilities, we implement several adversarial training methods designed to enhance the systems' robustness. Our results indicate that these methods significantly reduce the grading systems' susceptibility to such manipulations, especially when combined with ensemble techniques such as majority voting and ridge regression, which further strengthen the defense against sophisticated adversarial inputs. Additionally, employing large language models such as GPT-4 with varied prompting techniques shows promise in recognizing and scoring gaming strategies effectively. These findings underscore the importance of continuously improving AI-driven educational tools to ensure their reliability and fairness in high-stakes settings.
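The abstract names two ensemble defenses, majority voting and ridge regression. As a rough illustration of how such an ensemble could combine multiple graders' outputs, here is a minimal Python sketch; the three-model setup, random toy data, and 0.5 decision threshold are assumptions for illustration only, not the paper's implementation.

```python
# Hypothetical sketch of the ensemble defenses described in the abstract:
# majority voting and ridge-regression stacking over several graders.
# The data and model count are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-in: binary correct/incorrect predictions from three
# adversarially trained transformer graders (rows = answers, cols = models).
model_preds = rng.integers(0, 2, size=(200, 3))
true_labels = rng.integers(0, 2, size=200)

# Defense 1: majority voting -- an answer is accepted only if most
# graders in the ensemble accept it.
majority_vote = (model_preds.mean(axis=1) >= 0.5).astype(int)

# Defense 2: ridge-regression stacking -- learn a weighted combination
# of the graders' outputs against gold labels, then threshold.
stacker = Ridge(alpha=1.0)
stacker.fit(model_preds, true_labels)
stacked_scores = stacker.predict(model_preds)
stacked_labels = (stacked_scores >= 0.5).astype(int)

print("majority-vote accuracy:", (majority_vote == true_labels).mean())
print("ridge-stacked accuracy:", (stacked_labels == true_labels).mean())
```

The intuition behind either defense is the same: majority voting forces an adversarial input to fool most graders at once, while the ridge stacker can learn to down-weight graders that are individually easier to game.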
Similar Papers
Adversarial Attacks on Reinforcement Learning-based Medical Questionnaire Systems: Input-level Perturbation Strategies and Medical Constraint Validation
Cryptography and Security
Makes AI doctors make wrong guesses on purpose.
Automated Red-Teaming Framework for Large Language Model Security Assessment: A Comprehensive Attack Generation and Detection System
Cryptography and Security
Finds hidden dangers in AI programs.
Focusing on Students, not Machines: Grounded Question Generation and Automated Answer Grading
Computation and Language
Makes homework and tests easier for teachers.