Generate-Then-Validate: A Novel Question Generation Approach Using Small Language Models
By: Yumou Wei, John Stamper, Paulo F. Carvalho
Potential Business Impact:
Creates smart questions for learning from texts.
We explore the use of small language models (SLMs) for automatic question generation as a complement to the prevalent use of their large counterparts in learning analytics research. We present a novel question generation pipeline that leverages both the text generation and the probabilistic reasoning abilities of SLMs to generate high-quality questions. Adopting a "generate-then-validate" strategy, our pipeline first performs expansive generation to create an abundance of candidate questions and then refines them through selective validation based on novel probabilistic reasoning. We conducted two evaluation studies, one with seven human experts and the other with a large language model (LLM), to assess the quality of the generated questions. Most judges, human or LLM, agreed that the generated questions had clear answers and generally aligned well with the intended learning objectives. Our findings suggest that an SLM can effectively generate high-quality questions when guided by a well-designed pipeline that leverages its strengths.
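The generate-then-validate strategy described above can be sketched as a simple two-stage loop: over-generate candidates with the model, then rank them by a probabilistic validity score and keep the best few. The sketch below is illustrative only; the `generate` and `score` callables are hypothetical stand-ins (e.g., `score` might be the model's log-probability that a question is answerable from the passage), not the paper's actual implementation or API.

```python
from typing import Callable

def generate_then_validate(
    passage: str,
    generate: Callable[[str, int], list[str]],
    score: Callable[[str, str], float],
    n_candidates: int = 20,
    top_k: int = 5,
) -> list[str]:
    """Expansive generation followed by selective validation.

    `generate` stands in for SLM sampling; `score` stands in for a
    probabilistic validity criterion. Both interfaces are assumptions
    made for this sketch.
    """
    # Stage 1: expansive generation of many candidate questions.
    candidates = generate(passage, n_candidates)
    # Stage 2: selective validation -- deduplicate (order-preserving),
    # then keep the top-k candidates by score.
    unique = list(dict.fromkeys(candidates))
    ranked = sorted(unique, key=lambda q: score(passage, q), reverse=True)
    return ranked[:top_k]

# Toy stand-ins for demonstration only (no real SLM involved).
def toy_generate(passage: str, n: int) -> list[str]:
    words = passage.split()
    return [f"What does '{words[i % len(words)]}' refer to?" for i in range(n)]

def toy_score(passage: str, question: str) -> float:
    # Longer key terms act as a crude proxy for answerability.
    term = question.split("'")[1]
    return float(len(term))

best = generate_then_validate(
    "Small language models can generate questions",
    toy_generate, toy_score, n_candidates=10, top_k=3,
)
```

With a real SLM, `generate` would sample with high temperature to encourage diversity, and `score` would query the same model's token probabilities, so one small model serves both stages.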
Similar Papers
Towards Small Language Models for Security Query Generation in SOC Workflows
Cryptography and Security
Lets security guards ask computers questions in plain English.
Automatic Question & Answer Generation Using Generative Large Language Model (LLM)
Computation and Language
Creates test questions from books automatically.
Do small language models generate realistic variable-quality fake news headlines?
Computation and Language
Makes fake news headlines harder to spot.