A Test of Lookahead Bias in LLM Forecasts

Published: December 29, 2025 | arXiv ID: 2512.23847v1

By: Zhenyu Gao, Wenxi Jiang, Yutong Yan

We develop a statistical test to detect lookahead bias in economic forecasts generated by large language models (LLMs). Using state-of-the-art pre-training data detection techniques, we estimate the likelihood that a given prompt appeared in an LLM's training corpus, a statistic we term Lookahead Propensity (LAP). We formally show that a positive correlation between LAP and forecast accuracy indicates the presence and magnitude of lookahead bias, and apply the test to two forecasting tasks: news headlines predicting stock returns and earnings call transcripts predicting capital expenditures. Our test provides a cost-efficient diagnostic tool for assessing the validity and reliability of LLM-generated forecasts.
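The core of the test, as the abstract describes it, is checking whether LAP and forecast accuracy are positively correlated across prompts. Below is a minimal sketch of that correlation check in Python, assuming per-prompt LAP estimates and accuracy scores have already been computed; the function name, variable names, and synthetic data are illustrative only and do not reproduce the paper's procedure or results.

```python
import numpy as np
from scipy import stats


def lookahead_bias_test(lap, accuracy):
    """One-sided test for a positive correlation between Lookahead
    Propensity (LAP) and forecast accuracy.

    lap      : per-prompt LAP estimates (higher = more likely the prompt
               appeared in the LLM's pre-training corpus)
    accuracy : per-prompt forecast accuracy scores
    """
    lap = np.asarray(lap, dtype=float)
    accuracy = np.asarray(accuracy, dtype=float)
    r, p_two_sided = stats.pearsonr(lap, accuracy)
    # Convert the two-sided p-value to the one-sided alternative r > 0.
    p_one_sided = p_two_sided / 2 if r > 0 else 1 - p_two_sided / 2
    return r, p_one_sided


# Synthetic illustration (not data from the paper): accuracy rises with LAP,
# so the test should flag lookahead bias.
rng = np.random.default_rng(0)
lap = rng.uniform(0, 1, size=500)
accuracy = 0.3 * lap + rng.normal(0, 1, size=500)
r, p = lookahead_bias_test(lap, accuracy)
print(f"correlation = {r:.3f}, one-sided p-value = {p:.4f}")
```

A significantly positive correlation under this kind of test would indicate that the model forecasts better precisely on prompts it is more likely to have seen during pre-training, which is the signature of lookahead bias the paper targets.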

Category: Quantitative Finance (General Finance)