Your AI, Not Your View: The Bias of LLMs in Investment Analysis
By: Hoyoung Lee, Junhyuk Seo, Suhwan Park, and more
Potential Business Impact:
Helps AI make better stock picks by exposing its hidden biases.
In finance, Large Language Models (LLMs) face frequent knowledge conflicts arising from discrepancies between their pre-trained parametric knowledge and real-time market data. These conflicts are especially problematic in real-world investment services, where a model's inherent biases can misalign with institutional objectives, leading to unreliable recommendations. Despite this risk, the intrinsic investment biases of LLMs remain underexplored. We propose an experimental framework to investigate emergent behaviors in such conflict scenarios, offering a quantitative analysis of bias in LLM-based investment analysis. Using hypothetical scenarios with balanced and imbalanced arguments, we extract the latent biases of models and measure their persistence. Our analysis, centered on sector, size, and momentum, reveals distinct, model-specific biases. Across most models, a tendency to prefer technology stocks, large-cap stocks, and contrarian strategies is observed. These foundational biases often escalate into confirmation bias, causing models to cling to initial judgments even when faced with increasing counter-evidence. A public leaderboard benchmarking bias across a broader set of models is available at https://linqalpha.com/leaderboard.
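The experimental design described above lends itself to a compact probe. Below is a minimal sketch of that idea, assuming a generic chat-completion model behind a hypothetical `query_model` helper; the scenario text, prompt wording, and scoring are illustrative stand-ins, not the authors' exact protocol:

```python
# Minimal sketch of a bias-probing experiment in the spirit of the abstract.
# `query_model` is a hypothetical stand-in for any chat-completion API call;
# scenarios, prompts, and scoring below are illustrative assumptions.

import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real API client."""
    return random.choice(["A", "B"])  # placeholder behavior only

def build_scenario(n_pro_a: int, n_pro_b: int) -> str:
    """Compose a two-stock scenario with a controlled balance of arguments:
    equal counts probe the latent preference, while skewed counts test how
    much counter-evidence is needed to flip the pick.
    (A real run would also shuffle option order to control position bias.)"""
    pro_a = [f"Argument {i + 1} favoring Stock A (large-cap tech)." for i in range(n_pro_a)]
    pro_b = [f"Argument {i + 1} favoring Stock B (small-cap industrial)." for i in range(n_pro_b)]
    return (
        "You are an investment analyst. Based only on the arguments below, "
        "answer with exactly one letter, A or B.\n"
        + "\n".join(pro_a + pro_b)
    )

def latent_bias(trials: int = 100) -> Counter:
    """Balanced arguments: any systematic tilt reveals the model's prior."""
    return Counter(query_model(build_scenario(3, 3)).strip()[:1] for _ in range(trials))

def persistence_curve(max_counter: int = 5, trials: int = 50) -> dict:
    """Imbalanced arguments: share of trials still picking A as extra
    counter-evidence for B accumulates; a flat curve suggests the model
    clings to its initial judgment (confirmation bias)."""
    curve = {}
    for k in range(max_counter + 1):
        picks = [query_model(build_scenario(3, 3 + k)).strip()[:1] for _ in range(trials)]
        curve[k] = picks.count("A") / trials
    return curve

if __name__ == "__main__":
    print("Balanced-scenario picks:", latent_bias())
    print("Share still picking A vs. extra B arguments:", persistence_curve())
```

Under this setup, a tilt in the balanced scenario estimates the latent bias, while a persistence curve that stays flat as counter-arguments accumulate corresponds to the confirmation-bias escalation the abstract describes.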
Similar Papers
Exposing Product Bias in LLM Investment Recommendation
Computation and Language
AI picks favorites when suggesting investments.
Large Language Models Develop Novel Social Biases Through Adaptive Exploration
Computers and Society
Computers can invent new unfairness, not just copy it.