Towards Explainable and Reliable AI in Finance
By: Albi Isufaj, Pablo Mollá, Helmut Prendinger
Potential Business Impact:
Makes money predictions trustworthy and understandable.
Financial forecasting increasingly relies on large neural network models, but their opacity raises challenges for trust and regulatory compliance. We present several approaches to explainable and reliable AI in finance. First, we describe how Time-LLM, a time series foundation model, uses a prompt to avoid a wrong directional forecast. Second, we show that combining foundation models for time series forecasting with a reliability estimator can filter out unreliable predictions. Third, we argue for symbolic reasoning that encodes domain rules for transparent justification. Together, these approaches shift the emphasis toward executing only forecasts that are both reliable and explainable. Experiments on equity and cryptocurrency data show that the architecture reduces false positives and supports selective execution. By integrating predictive performance with reliability estimation and rule-based reasoning, our framework advances transparent and auditable financial AI systems.
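To illustrate the second idea, the sketch below shows what selective execution based on a reliability estimator could look like: a forecast is only acted on if its reliability score clears a threshold. This is a minimal, hypothetical illustration, not the authors' implementation; the names ForecastSignal, reliability, and select_executable are assumptions introduced here for clarity.

```python
# Minimal sketch of selective execution: act only on forecasts whose
# reliability score clears a threshold. All names are hypothetical and
# stand in for the paper's foundation-model forecasts and reliability
# estimator outputs.

from dataclasses import dataclass
from typing import List


@dataclass
class ForecastSignal:
    symbol: str
    predicted_return: float  # directional forecast from the time series model
    reliability: float       # score in [0, 1] from the reliability estimator


def select_executable(signals: List[ForecastSignal],
                      threshold: float = 0.8) -> List[ForecastSignal]:
    """Filter out unreliable predictions; keep only signals reliable enough to execute."""
    return [s for s in signals if s.reliability >= threshold]


if __name__ == "__main__":
    signals = [
        ForecastSignal("AAPL", predicted_return=0.012, reliability=0.91),
        ForecastSignal("BTC-USD", predicted_return=-0.034, reliability=0.55),
    ]
    for s in select_executable(signals):
        print(f"Execute: {s.symbol} "
              f"(expected return {s.predicted_return:+.1%}, "
              f"reliability {s.reliability:.2f})")
```

In this toy example only the first signal would be executed; the cryptocurrency forecast is filtered out as unreliable, which is the kind of false-positive reduction the abstract describes.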
Similar Papers
Explainable-AI powered stock price prediction using time series transformers: A Case Study on BIST100
Statistical Finance
Helps people understand stock prices to invest better.
Explaining the Unexplainable: A Systematic Review of Explainable AI in Finance
General Finance
Helps people understand how computers make money decisions.
On Identifying Why and When Foundation Models Perform Well on Time-Series Forecasting Using Automated Explanations and Rating
Machine Learning (CS)
Shows when computer predictions are good or bad.