On the Performance of LLMs for Real Estate Appraisal
By: Margot Geerts, Manon Reusens, Bart Baesens, and more
Potential Business Impact:
Helps people get fair, understandable house price estimates.
The real estate market is vital to global economies but suffers from significant information asymmetry. This study examines how Large Language Models (LLMs) can democratize access to real estate insights by generating competitive and interpretable house price estimates through optimized In-Context Learning (ICL) strategies. We systematically evaluate leading LLMs on diverse international housing datasets, comparing zero-shot, few-shot, market report-enhanced, and hybrid prompting techniques. Our results show that LLMs effectively leverage hedonic variables, such as property size and amenities, to produce meaningful estimates. While traditional machine learning models remain stronger in pure predictive accuracy, LLMs offer a more accessible, interactive, and interpretable alternative. Although self-explanations require cautious interpretation, we find that LLMs explain their predictions in agreement with state-of-the-art models, supporting their trustworthiness. Carefully selected in-context examples, chosen by feature similarity and geographic proximity, significantly enhance LLM performance, yet LLMs struggle with overconfidence in price intervals and limited spatial reasoning. We offer practical guidance for structured prediction tasks through prompt optimization. Our findings highlight LLMs' potential to improve transparency in real estate appraisal and provide actionable insights for stakeholders.
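To make the few-shot ICL idea concrete, the sketch below shows one plausible way to select in-context examples by combining feature similarity with geographic proximity and format them into a price-estimation prompt. This is a minimal illustration, not the authors' pipeline: the field names (size_m2, bedrooms, lat, lon, price), the weighting scheme, and the prompt wording are all assumptions made for this example.

```python
# Minimal sketch (assumed, not the paper's exact method): retrieve comparable
# sales that are close in hedonic feature space AND geographically, then build
# a few-shot prompt for an LLM-based price estimate.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def dissimilarity(target, candidate, w_geo=0.5):
    """Lower is more comparable: relative hedonic gaps plus geographic distance."""
    feat_gap = (abs(target["size_m2"] - candidate["size_m2"]) / max(target["size_m2"], 1)
                + abs(target["bedrooms"] - candidate["bedrooms"]) / max(target["bedrooms"], 1))
    geo_gap = haversine_km(target["lat"], target["lon"], candidate["lat"], candidate["lon"])
    return (1 - w_geo) * feat_gap + w_geo * geo_gap

def build_fewshot_prompt(target, past_sales, k=5):
    """Select the k most comparable past sales and format a few-shot prompt."""
    shots = sorted(past_sales, key=lambda s: dissimilarity(target, s))[:k]
    lines = ["Estimate the sale price of the target property using the comparable sales below."]
    for s in shots:
        lines.append(f"- {s['size_m2']} m2, {s['bedrooms']} bedrooms, "
                     f"({s['lat']:.4f}, {s['lon']:.4f}) -> sold for {s['price']:,}")
    lines.append(f"Target: {target['size_m2']} m2, {target['bedrooms']} bedrooms, "
                 f"({target['lat']:.4f}, {target['lon']:.4f}) -> estimated price?")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical listings for illustration only.
    history = [
        {"size_m2": 120, "bedrooms": 3, "lat": 50.88, "lon": 4.70, "price": 420_000},
        {"size_m2": 95, "bedrooms": 2, "lat": 50.87, "lon": 4.72, "price": 355_000},
        {"size_m2": 180, "bedrooms": 4, "lat": 51.05, "lon": 3.72, "price": 610_000},
    ]
    target = {"size_m2": 110, "bedrooms": 3, "lat": 50.88, "lon": 4.71}
    print(build_fewshot_prompt(target, history, k=2))
```

The resulting prompt string would then be sent to whichever LLM is being evaluated; swapping the selection rule (random examples, feature-only similarity, geography-only) gives the kind of prompting variants the study compares.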
Similar Papers
Evaluating Large Language Model Capabilities in Assessing Spatial Econometrics Research
Computers and Society
AI checks if science papers make economic sense.
Evaluating LLMs for Visualization Tasks
Software Engineering
Tests how well AI turns words into charts and visuals.
LLM-Evaluation Tropes: Perspectives on the Validity of LLM-Evaluations
Information Retrieval
AI judges might trick us into thinking systems are good.