Uncovering Representation Bias for Investment Decisions in Open-Source Large Language Models

Published: October 7, 2025 | arXiv ID: 2510.05702v1

By: Fabrizio Dimino, Krati Saxena, Bhaskarjit Sarmah, and others

Potential Business Impact:

LLMs used in finance absorb biases about firm size, sector, and valuation; these need calibration before fair deployment in investment workflows.

Business Areas:
Simulation Software

Large Language Models are increasingly adopted in financial applications to support investment workflows. However, prior studies have seldom examined how these models reflect biases related to firm size, sector, or financial characteristics, which can significantly impact decision-making. This paper addresses this gap by focusing on representation bias in open-source Qwen models. We propose a balanced round-robin prompting method over approximately 150 U.S. equities, applying constrained decoding and token-logit aggregation to derive firm-level confidence scores across financial contexts. Using statistical tests and variance analysis, we find that firm size and valuation consistently increase model confidence, while risk factors tend to decrease it. Confidence varies significantly across sectors, with the Technology sector showing the greatest variability. When models are prompted for specific financial categories, their confidence rankings best align with fundamental data, moderately with technical signals, and least with growth indicators. These results highlight representation bias in Qwen models and motivate sector-aware calibration and category-conditioned evaluation protocols for safe and fair financial LLM deployment.
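The abstract describes deriving firm-level confidence scores via constrained decoding (restricting the model's answer to a fixed set of option tokens) and token-logit aggregation across round-robin prompt variants. The sketch below illustrates that general idea with hypothetical logit values; the option set, aggregation by mean, and all numbers are assumptions for illustration, not the paper's actual implementation.

```python
import math

def constrained_softmax(option_logits):
    """Constrained decoding step: renormalize probability mass over only the
    allowed answer tokens (e.g. {"Yes", "No"}) instead of the full vocabulary."""
    m = max(option_logits)                     # subtract max for numerical stability
    exps = [math.exp(x - m) for x in option_logits]
    total = sum(exps)
    return [e / total for e in exps]

def firm_confidence(per_prompt_logits, positive_index=0):
    """Token-logit aggregation: average the probability of the positive option
    across all prompt variants for one firm (mean aggregation is an assumption)."""
    scores = [constrained_softmax(logits)[positive_index]
              for logits in per_prompt_logits]
    return sum(scores) / len(scores)

# Hypothetical ("Yes", "No") logits for one firm across three prompt contexts.
example_logits = [(2.1, 0.3), (1.4, 0.9), (3.0, -0.5)]
print(round(firm_confidence(example_logits), 3))
```

With the two-option set, each per-prompt score reduces to a sigmoid of the logit gap, so the firm-level score stays in (0, 1) and is comparable across firms and sectors, which is what the paper's statistical tests require.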

Page Count
12 pages

Category
Quantitative Finance: Computational Finance