A Fast and Effective Solution to the Problem of Look-ahead Bias in LLMs
By: Humzah Merchant, Bradford Levy
Potential Business Impact:
Lets firms backtest AI-driven financial predictions without future information leaking in.
Applying LLMs to predictive tasks in finance is challenging due to look-ahead bias resulting from their training on long time-series data. This precludes the backtests typically employed in finance since retraining frontier models from scratch with a specific knowledge cutoff is prohibitive. In this paper, we introduce a fast, effective, and low-cost alternative. Our method guides generation at inference time by adjusting the logits of a large base model using a pair of smaller, specialized models -- one fine-tuned on information to be forgotten and another on information to be retained. We demonstrate that our method effectively removes both verbatim and semantic knowledge, corrects biases, and outperforms prior methods.
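The adjustment described in the abstract can be illustrated as a contrast between the forget-tuned and retain-tuned models' next-token logits, applied on top of the base model at decoding time. The snippet below is a minimal sketch, not the authors' implementation: the additive combination rule, the single coefficient `alpha`, the greedy decoding loop, and the model paths are all assumptions for illustration, and it presumes the two small models share the base model's tokenizer and vocabulary.

```python
# Sketch of inference-time logit adjustment with a forget/retain model pair.
# Assumptions: additive steering rule, shared vocabulary across all three models,
# and placeholder model identifiers. The paper's exact rule may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-2-7b-hf"       # hypothetical large base model
FORGET = "path/to/forget-finetuned"     # hypothetical small model tuned on data to forget
RETAIN = "path/to/retain-finetuned"     # hypothetical small model tuned on data to retain

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE).eval()
forget = AutoModelForCausalLM.from_pretrained(FORGET).eval()
retain = AutoModelForCausalLM.from_pretrained(RETAIN).eval()

@torch.no_grad()
def generate(prompt: str, max_new_tokens: int = 64, alpha: float = 1.0) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Next-token logits from each model on the same prefix.
        l_base = base(ids).logits[:, -1, :]
        l_forget = forget(ids).logits[:, -1, :]
        l_retain = retain(ids).logits[:, -1, :]
        # Steer the base model toward the retain distribution and away from
        # the forget distribution (assumed additive rule).
        adjusted = l_base + alpha * (l_retain - l_forget)
        next_id = adjusted.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(generate("In Q3 2020, the company reported revenue of"))
```

In practice one would likely sample from a softmax over the adjusted logits rather than taking the argmax, and batch or cache the three forward passes for efficiency; the key point is that no retraining of the large base model is required.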
Similar Papers
Robustness is Important: Limitations of LLMs for Data Fitting
Machine Learning (CS)
Shows that LLMs' answers change when superficial details, such as names, are changed.
How to Correctly Report LLM-as-a-Judge Evaluations
Machine Learning (CS)
Shows how to account for LLM-judge errors so evaluations are reported fairly.
LLMLagBench: Identifying Temporal Training Boundaries in Large Language Models
Computation and Language
Tests how up-to-date an LLM's training knowledge is.