QSTN: A Modular Framework for Robust Questionnaire Inference with Large Language Models
By: Maximilian Kreutner, Jens Rupprecht, Georg Ahnert, and more
Potential Business Impact:
Lets computers answer survey questions like people.
We introduce QSTN, an open-source Python framework for systematically generating responses from questionnaire-style prompts to support in-silico surveys and annotation tasks with large language models (LLMs). QSTN enables robust evaluation of questionnaire presentation, prompt perturbations, and response generation methods. Our extensive evaluation (>40 million survey responses) shows that question structure and response generation methods have a significant impact on the alignment of generated survey responses with human answers, and that comparable alignment can be obtained at a fraction of the compute cost. In addition, we offer a no-code user interface that allows researchers to set up robust experiments with LLMs without coding knowledge. We hope that QSTN will support the reproducibility and reliability of LLM-based research in the future.
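The abstract does not describe QSTN's actual API, but the workflow it outlines (presenting questionnaire items, perturbing prompts, and repeating response generation) can be sketched in plain Python. The sketch below is a minimal illustration under that assumption; the names `shuffle_options`, `build_prompt`, `run_survey`, and `ask_llm` are hypothetical and not part of QSTN.

```python
# Hypothetical sketch of an in-silico survey loop; these names are
# illustrative assumptions, not QSTN's interface.
import random
from typing import Callable, Dict, List

def shuffle_options(options: List[str]) -> List[str]:
    """One simple prompt perturbation: reorder the answer options."""
    shuffled = options[:]
    random.shuffle(shuffled)
    return shuffled

def build_prompt(question: str, options: List[str]) -> str:
    """Render a single questionnaire item as a plain-text prompt."""
    lines = [question] + [f"{i + 1}. {opt}" for i, opt in enumerate(options)]
    lines.append("Answer with the number of one option.")
    return "\n".join(lines)

def run_survey(items: List[Dict], ask_llm: Callable[[str], str],
               n_repeats: int = 3) -> List[Dict]:
    """Query the model repeatedly under perturbed prompts and collect raw answers."""
    records = []
    for item in items:
        for rep in range(n_repeats):
            options = shuffle_options(item["options"])
            prompt = build_prompt(item["question"], options)
            records.append({
                "question": item["question"],
                "options": options,
                "repeat": rep,
                # The caller supplies the actual LLM call here.
                "raw_answer": ask_llm(prompt),
            })
    return records

if __name__ == "__main__":
    items = [{"question": "How satisfied are you with your job?",
              "options": ["Very satisfied", "Somewhat satisfied",
                          "Somewhat dissatisfied", "Very dissatisfied"]}]
    # Stub model for demonstration: always picks the first listed option.
    print(run_survey(items, ask_llm=lambda prompt: "1"))
```

In practice, the repeated generations under shuffled options would then be compared against human response distributions, which is the kind of alignment evaluation the abstract describes.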
Similar Papers
Questionnaire meets LLM: A Benchmark and Empirical Study of Structural Skills for Understanding Questions and Responses
Artificial Intelligence
Helps computers understand survey answers better.
Methodological Foundations for AI-Driven Survey Question Generation
Computers and Society
Makes smart computer questions for school surveys.
QueST: Incentivizing LLMs to Generate Difficult Problems
Computation and Language
Makes computers better at solving hard math and code problems.