Evaluating Large Language Model Capabilities in Assessing Spatial Econometrics Research
By: Giuseppe Arbia, Luca Morandini, Vincenzo Nardelli
Potential Business Impact:
AI checks if science papers make economic sense.
This paper investigates the ability of Large Language Models (LLMs) to assess the economic soundness and theoretical consistency of empirical findings in spatial econometrics. We created original and deliberately altered "counterfactual" summaries from 28 published papers (2005-2024), which were evaluated by a diverse set of LLMs. The LLMs provided qualitative assessments and structured binary classifications on variable choice, coefficient plausibility, and publication suitability. The results indicate that while LLMs can competently assess the coherence of variable choices (with top models such as GPT-4o achieving an overall F1 score of 0.87), their performance varies significantly when evaluating deeper aspects such as coefficient plausibility and overall publication suitability. The results further revealed that the choice of LLM, the specific characteristics of the paper, and the interaction between these two factors significantly influence the accuracy of the assessment, particularly for nuanced judgments. These findings highlight LLMs' current strengths in assisting with initial, surface-level checks and their limitations in performing comprehensive, deep economic reasoning, suggesting a potential assistive role in peer review that still necessitates robust human oversight.
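Because the LLMs return structured binary verdicts against known ground truth (original vs. counterfactual summaries), the reported F1 scores follow the standard precision/recall computation. Below is a minimal sketch of how such scoring could be reproduced; the labels, verdicts, and variable names are illustrative assumptions, not data from the paper.

```python
# Minimal sketch: scoring an LLM's binary judgments against ground truth,
# assuming 1 = original (economically sound) and 0 = counterfactual (altered).
# All values below are hypothetical and for illustration only.
from sklearn.metrics import precision_score, recall_score, f1_score

# Ground-truth labels for a small set of paper summaries (illustrative)
ground_truth = [1, 1, 0, 0, 1, 0, 1, 0]

# Hypothetical LLM verdicts on, e.g., the "variable choice" criterion
llm_verdicts = [1, 1, 0, 1, 1, 0, 0, 0]

print("Precision:", precision_score(ground_truth, llm_verdicts))
print("Recall:   ", recall_score(ground_truth, llm_verdicts))
print("F1:       ", f1_score(ground_truth, llm_verdicts))
```

The same scoring would be applied separately to each criterion (variable choice, coefficient plausibility, publication suitability) and each model, which is how criterion-level differences in performance become visible.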
Similar Papers
Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses
Artificial Intelligence
Helps computers combine messy map data better.
On the Performance of LLMs for Real Estate Appraisal
Artificial Intelligence
Helps people guess house prices fairly.
Geospatial Mechanistic Interpretability of Large Language Models
Machine Learning (CS)
Shows how computers "see" maps and places.