Understanding Structured Financial Data with LLMs: A Case Study on Fraud Detection

Published: December 15, 2025 | arXiv ID: 2512.13040v1

By: Xuwei Tan, Yao Ma, Xueru Zhang

Potential Business Impact:

Helps detect fraudulent financial transactions while providing human-readable explanations for flagged cases.

Business Areas:
Fraud Detection, Financial Services, Payments, Privacy and Security

Detecting fraud in financial transactions typically relies on tabular models that demand heavy feature engineering to handle high-dimensional data and offer limited interpretability, making it difficult for humans to understand predictions. Large Language Models (LLMs), in contrast, can produce human-readable explanations and facilitate feature analysis, potentially reducing the manual workload of fraud analysts and informing system refinements. However, they perform poorly when applied directly to tabular fraud detection due to the difficulty of reasoning over many features, the extreme class imbalance, and the absence of contextual information. To bridge this gap, we introduce FinFRE-RAG, a two-stage approach that applies importance-guided feature reduction to serialize a compact subset of numeric/categorical attributes into natural language and performs retrieval-augmented in-context learning over label-aware, instance-level exemplars. Across four public fraud datasets and three families of open-weight LLMs, FinFRE-RAG substantially improves F1/MCC over direct prompting and is competitive with strong tabular baselines in several settings. Although these LLMs still lag behind specialized classifiers, they narrow the performance gap and provide interpretable rationales, highlighting their value as assistive tools in fraud analysis.
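To make the two-stage idea concrete, here is a minimal Python sketch of the general pattern the abstract describes: importance-guided feature reduction followed by serialization and retrieval of label-aware exemplars for an in-context prompt. All function names, parameters, and modeling choices (e.g., using a random-forest importance ranking and nearest-neighbor retrieval) are illustrative assumptions, not the authors' actual FinFRE-RAG implementation.

```python
# Illustrative sketch of the two-stage pipeline described above.
# Names and design choices here are assumptions for clarity only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors


def select_top_features(X, y, feature_names, k=10):
    """Stage 1: importance-guided feature reduction (here via a tree ensemble)."""
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    top_idx = np.argsort(model.feature_importances_)[::-1][:k]
    return top_idx, [feature_names[i] for i in top_idx]


def serialize(row, names):
    """Serialize the reduced numeric/categorical attributes into natural language."""
    return "; ".join(f"{n} is {v}" for n, v in zip(names, row))


def build_prompt(query_row, X_train, y_train, top_idx, names, n_exemplars=4):
    """Stage 2: retrieve label-aware exemplars and assemble an in-context prompt."""
    Xr = X_train[:, top_idx]
    prompt_lines = []
    # Retrieve nearest neighbors separately per class so both labels are
    # represented despite the extreme class imbalance noted in the abstract.
    for label in (0, 1):
        mask = (y_train == label)
        nn = NearestNeighbors(n_neighbors=n_exemplars // 2).fit(Xr[mask])
        _, idx = nn.kneighbors(query_row[top_idx].reshape(1, -1))
        for i in idx[0]:
            text = serialize(Xr[mask][i], names)
            label_str = "fraud" if label else "legitimate"
            prompt_lines.append(f"Transaction: {text}\nLabel: {label_str}")
    # The query transaction goes last; the LLM completes the final label.
    prompt_lines.append(f"Transaction: {serialize(query_row[top_idx], names)}\nLabel:")
    return "\n\n".join(prompt_lines)
```

The resulting prompt string would then be passed to an open-weight LLM, which completes the final label and can be asked to justify its decision in natural language.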

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)