Human- vs. AI-generated tests: dimensionality and information accuracy in latent trait evaluation
By: Mario Angelelli, Morena Oliva, Serena Arima, and more
Potential Business Impact:
AI makes surveys that measure feelings better.
Artificial Intelligence (AI) and large language models (LLMs) are increasingly used in social and psychological research. Among potential applications, LLMs can be used to generate, customise, or adapt measurement instruments. This study presents a preliminary investigation of AI-generated questionnaires by comparing two ChatGPT-based adaptations of the Body Awareness Questionnaire (BAQ) with the validated human-developed version. The AI instruments were designed with different levels of explicitness in content and instructions on construct facets, and their psychometric properties were assessed using a Bayesian Graded Response Model. Results show that although surface wording between AI and original items was similar, differences emerged in dimensionality and in the distribution of item and test information across latent traits. These findings illustrate the importance of applying statistical measures of accuracy to ensure the validity and interpretability of AI-driven tools.
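The Graded Response Model mentioned in the abstract scores ordered response categories (e.g. Likert items) via cumulative logistic curves, and "item information" quantifies how precisely an item measures the latent trait at a given ability level. As a minimal illustrative sketch (not the authors' Bayesian implementation; the discrimination and threshold values below are made-up examples), the standard GRM category probabilities and Fisher information can be computed like this:

```python
import math

def grm_category_probs(theta, a, b):
    """Graded Response Model: probability of each of K response categories.

    theta: latent trait level; a: item discrimination;
    b: K-1 ordered thresholds (illustrative values, not from the paper).
    """
    # Cumulative curves P*_k = P(X >= k), with boundaries P*_0 = 1, P*_K = 0.
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - bk))) for bk in b] + [0.0]
    # Category probability is the difference of adjacent cumulative curves.
    return [cum[k] - cum[k + 1] for k in range(len(b) + 1)]

def grm_item_information(theta, a, b):
    """Fisher information contributed by one GRM item at trait level theta."""
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - bk))) for bk in b] + [0.0]
    probs = [cum[k] - cum[k + 1] for k in range(len(b) + 1)]
    info = 0.0
    for k in range(len(probs)):
        # Derivative of a logistic cumulative curve: a * P* * (1 - P*).
        d_upper = a * cum[k] * (1.0 - cum[k])
        d_lower = a * cum[k + 1] * (1.0 - cum[k + 1])
        info += (d_upper - d_lower) ** 2 / probs[k]
    return info

# Example: a 4-category item with hypothetical parameters.
probs = grm_category_probs(theta=0.0, a=1.5, b=[-1.0, 0.0, 1.0])
info = grm_item_information(theta=0.0, a=1.5, b=[-1.0, 0.0, 1.0])
```

Summing item information over all items gives the test information function; comparing how that information is distributed across the latent trait is the kind of accuracy analysis the study applies to the AI-generated and human-developed questionnaires.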
Similar Papers
Assessing the Quality of AI-Generated Exams: A Large-Scale Field Study
Computers and Society
AI makes better tests for students and teachers.
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Computation and Language
Makes AI talk like people, but it's not quite there.
Evaluating LLM-Generated Q&A Test: a Student-Centered Study
Computation and Language
AI makes good tests for school subjects.