Understanding Structured Financial Data with LLMs: A Case Study on Fraud Detection
By: Xuwei Tan, Yao Ma, Xueru Zhang
Potential Business Impact:
Helps spot fraudulent transactions and gives easy-to-read reasons.
Detecting fraud in financial transactions typically relies on tabular models that demand heavy feature engineering to handle high-dimensional data and offer limited interpretability, making it difficult for humans to understand predictions. Large Language Models (LLMs), in contrast, can produce human-readable explanations and facilitate feature analysis, potentially reducing the manual workload of fraud analysts and informing system refinements. However, they perform poorly when applied directly to tabular fraud detection due to the difficulty of reasoning over many features, the extreme class imbalance, and the absence of contextual information. To bridge this gap, we introduce FinFRE-RAG, a two-stage approach that applies importance-guided feature reduction to serialize a compact subset of numeric/categorical attributes into natural language and performs retrieval-augmented in-context learning over label-aware, instance-level exemplars. Across four public fraud datasets and three families of open-weight LLMs, FinFRE-RAG substantially improves F1/MCC over direct prompting and is competitive with strong tabular baselines in several settings. Although these LLMs still lag behind specialized classifiers, they narrow the performance gap and provide interpretable rationales, highlighting their value as assistive tools in fraud analysis.
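To make the two-stage idea concrete, here is a minimal Python sketch of what importance-guided feature reduction, natural-language serialization, and label-aware exemplar retrieval could look like in practice. The function names (top_k_features, serialize_row, build_prompt), the random-forest importance ranking, and the nearest-neighbour retrieval are illustrative assumptions, not the authors' actual FinFRE-RAG implementation.

```python
# Illustrative sketch of the two-stage pipeline described in the abstract.
# All names and modeling choices here are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

def top_k_features(X_train, y_train, feature_names, k=10):
    """Stage 1a: importance-guided feature reduction (here via a random forest)."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    order = np.argsort(rf.feature_importances_)[::-1][:k]
    return order, [feature_names[i] for i in order]

def serialize_row(row, names):
    """Stage 1b: serialize the reduced attribute subset into natural language."""
    return "; ".join(f"{n} is {v}" for n, v in zip(names, row))

def build_prompt(query_row, X_pool, y_pool, cols, names, n_examples=4):
    """Stage 2: retrieve label-aware exemplars and assemble an in-context prompt."""
    X_red = X_pool[:, cols]
    prompt_parts = []
    # Retrieve nearest neighbours from each class separately so both labels appear.
    for label in (0, 1):
        idx = np.where(y_pool == label)[0]
        nn = NearestNeighbors(n_neighbors=min(n_examples // 2, len(idx))).fit(X_red[idx])
        _, nbrs = nn.kneighbors(query_row[cols].reshape(1, -1))
        for j in idx[nbrs[0]]:
            prompt_parts.append(
                f"Transaction: {serialize_row(X_red[j], names)}\n"
                f"Label: {'fraud' if y_pool[j] == 1 else 'legitimate'}"
            )
    # The query transaction goes last; an LLM completes the final label with a rationale.
    prompt_parts.append(f"Transaction: {serialize_row(query_row[cols], names)}\nLabel:")
    return "\n\n".join(prompt_parts)
```

The resulting prompt string would then be sent to an open-weight LLM, whose generated label and explanation serve as the prediction; the exact serialization template, retrieval metric, and exemplar budget in FinFRE-RAG may differ from this sketch.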
Similar Papers
Measuring What LLMs Think They Do: SHAP Faithfulness and Deployability on Financial Tabular Classification
Machine Learning (CS)
Makes AI explain financial risks more honestly.
Information Extraction From Fiscal Documents Using LLMs
Computation and Language
Lets computers understand government money reports.
Interpreting LLMs as Credit Risk Classifiers: Do Their Feature Explanations Align with Classical ML?
Computation and Language
Helps banks predict loan defaults better.