Uncovering Representation Bias for Investment Decisions in Open-Source Large Language Models
By: Fabrizio Dimino, Krati Saxena, Bhaskarjit Sarmah, and more
Potential Business Impact:
Finds whether AI favors big companies in money advice.
Large Language Models are increasingly adopted in financial applications to support investment workflows. However, prior studies have seldom examined how these models reflect biases related to firm size, sector, or financial characteristics, which can significantly impact decision-making. This paper addresses this gap by focusing on representation bias in open-source Qwen models. We propose a balanced round-robin prompting method over approximately 150 U.S. equities, applying constrained decoding and token-logit aggregation to derive firm-level confidence scores across financial contexts. Using statistical tests and variance analysis, we find that firm size and valuation consistently increase model confidence, while risk factors tend to decrease it. Confidence varies significantly across sectors, with the Technology sector showing the greatest variability. When models are prompted for specific financial categories, their confidence rankings best align with fundamental data, moderately with technical signals, and least with growth indicators. These results highlight representation bias in Qwen models and motivate sector-aware calibration and category-conditioned evaluation protocols for safe and fair financial LLM deployment.
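The abstract's core mechanics can be made concrete with a minimal sketch. The snippet below illustrates one plausible reading of "constrained decoding and token-logit aggregation": restrict the model's next token to a small answer vocabulary, renormalize the logits over that set, and average the resulting probabilities per firm across category-conditioned prompts. The checkpoint name, prompt template, answer set, tickers, and averaging rule are all illustrative assumptions, not the paper's actual code.

```python
# A minimal sketch of constrained decoding with token-logit aggregation,
# assuming a Hugging Face Qwen checkpoint. The prompt template, answer
# vocabulary, and aggregation below are illustrative guesses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"  # hypothetical checkpoint choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

# Constrain the model to a small answer vocabulary and read only those
# tokens' logits, rather than sampling free-form text.
ANSWERS = ["yes", "no"]
answer_ids = [tokenizer.encode(a, add_special_tokens=False)[0] for a in ANSWERS]

def confidence_score(firm: str, category: str) -> float:
    """Return P('yes') over the constrained answer set for one firm/category pair."""
    prompt = (
        f"Considering {category} indicators, would you recommend "
        f"investing in {firm}? Answer yes or no: "
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]    # next-token logits
    answer_logits = logits[answer_ids]            # restrict to {yes, no}
    probs = torch.softmax(answer_logits, dim=-1)  # renormalize over the answer set
    return probs[0].item()                        # confidence in "yes"

# Round-robin over firms so each appears equally often in each category,
# then aggregate per-firm scores (here: a simple mean).
firms = ["AAPL", "MSFT", "XOM"]                          # placeholder tickers
categories = ["fundamental", "technical", "growth"]      # category-conditioned prompts
scores = {
    f: sum(confidence_score(f, c) for c in categories) / len(categories)
    for f in firms
}
print(scores)
```

Firm-level scores produced this way can then be ranked and compared against fundamental, technical, and growth data, in the spirit of the category-conditioned evaluation the abstract describes.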
Similar Papers
Tracing Positional Bias in Financial Decision-Making: Mechanistic Insights from Qwen2.5
Computational Finance
Finds hidden bias in money-making computer programs.