Difficulty-Controllable Multiple-Choice Question Generation Using Large Language Models and Direct Preference Optimization
By: Yuto Tomikawa, Masaki Uto
Potential Business Impact:
Lets educators automatically generate test questions at a chosen difficulty level.
Difficulty-controllable question generation for reading comprehension has gained significant attention in the field of education as a fundamental tool for adaptive learning support. Although several neural question generation methods have recently succeeded in controlling difficulty, conventional approaches still face two major limitations. First, they cannot directly generate multiple-choice questions, which are the most widely used question type in educational contexts. Second, they are not explicitly trained to optimize the accuracy of difficulty control, leaving room for further improvement in difficulty controllability. To address these limitations, this study proposes a novel difficulty-controllable multiple-choice question generation method for reading comprehension that leverages a large language model trained with direct preference optimization (DPO) to improve the accuracy of difficulty control.
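The paper summary does not include code, but the core training recipe (preference optimization toward a target difficulty) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the base model name, prompt format, and toy preference pairs are invented for the example, the use of Hugging Face's TRL DPOTrainer is an assumption, and keyword names vary across TRL versions (older releases take tokenizer= rather than processing_class=).

```python
# Minimal sketch (not the authors' implementation): fine-tuning an LLM with
# Direct Preference Optimization so that generated multiple-choice questions
# match a requested difficulty level. Model name, prompt format, and toy
# preference pairs are illustrative assumptions.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "meta-llama/Llama-3.2-1B-Instruct"  # assumed base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Each example pairs a difficulty-conditioned prompt with a "chosen" MCQ whose
# estimated difficulty matches the requested level and a "rejected" MCQ that
# misses the target (pair construction here is purely illustrative).
preference_data = Dataset.from_dict({
    "prompt": [
        "Difficulty: hard\nPassage: The mitochondria ...\n"
        "Write a multiple-choice reading-comprehension question with 4 options."
    ],
    "chosen": [
        "Q: Which inference about ATP synthesis is best supported ...\n"
        "A) ...  B) ...  C) ...  D) ...\nAnswer: C"
    ],
    "rejected": [
        "Q: Which organelle is mentioned in the passage?\n"
        "A) ...  B) ...  C) ...  D) ...\nAnswer: A"
    ],
})

training_args = DPOConfig(
    output_dir="dpo-difficulty-mcq",
    beta=0.1,  # strength of the KL penalty toward the reference model
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,                     # reference model is created automatically
    args=training_args,
    train_dataset=preference_data,
    processing_class=tokenizer,
)
trainer.train()
```

In practice the preference pairs would be built from questions whose difficulty has been estimated by some external measure, so that the "chosen" response is the one closest to the requested level; the paper's actual pair-construction procedure is not reproduced here.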
Similar Papers
Difficulty-Controllable Cloze Question Distractor Generation
Computation and Language
Creates harder word puzzles for language tests.
Advancing Question Generation with Joint Narrative and Difficulty Control
Computation and Language
Makes learning questions harder or easier.
Self-Correcting Large Language Models: Generation vs. Multiple Choice
Computation and Language
Helps computers fix their own mistakes better.