Exploring the In-Context Learning Capabilities of LLMs for Money Laundering Detection in Financial Graphs
By: Erfan Pirmorad
Potential Business Impact:
Helps catch money launderers by reading financial clues.
The complexity and interconnectivity of entities involved in money laundering demand investigative reasoning over graph-structured data. This paper explores the use of large language models (LLMs) as reasoning engines over localized subgraphs extracted from a financial knowledge graph. We propose a lightweight pipeline that retrieves k-hop neighborhoods around entities of interest, serializes them into structured text, and prompts an LLM via few-shot in-context learning to assess suspiciousness and generate justifications. Using synthetic anti-money laundering (AML) scenarios that reflect common laundering behaviors, we show that LLMs can emulate analyst-style logic, highlight red flags, and provide coherent explanations. While this study is exploratory, it illustrates the potential of LLM-based graph reasoning in AML and lays groundwork for explainable, language-driven financial crime analytics.
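The pipeline described above (retrieve a k-hop neighborhood, serialize it into structured text, and build a few-shot prompt) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the entity names, edge attributes, serialization format, and prompt template are all assumptions, and the actual LLM call is omitted.

```python
# Hypothetical sketch of the pipeline: extract a k-hop neighborhood around an
# entity of interest, serialize it to structured text, and assemble a few-shot
# prompt. Names, edge attributes, and the template are illustrative only.
import networkx as nx

def k_hop_subgraph(g, entity, k=2):
    """Return the induced subgraph of nodes within k hops of `entity`."""
    return nx.ego_graph(g, entity, radius=k)

def serialize(sub):
    """Render each transaction edge as one structured-text line."""
    lines = [
        f"{u} -[{d.get('type', 'txn')} ${d.get('amount', 0)}]-> {v}"
        for u, v, d in sub.edges(data=True)
    ]
    return "\n".join(sorted(lines))

def build_prompt(serialized, examples):
    """Few-shot prompt: labeled example subgraphs, then the query subgraph."""
    shots = "\n\n".join(
        f"Subgraph:\n{s}\nAssessment: {label}" for s, label in examples
    )
    return (f"{shots}\n\nSubgraph:\n{serialized}\n"
            "Assessment (suspicious or not, with justification):")

# Toy scenario: A fans out just-under-threshold wires through shell accounts
# that converge on C -- a common structuring ("smurfing") pattern.
g = nx.DiGraph()
g.add_edge("A", "shell1", type="wire", amount=9500)
g.add_edge("A", "shell2", type="wire", amount=9400)
g.add_edge("shell1", "C", type="wire", amount=9500)
g.add_edge("shell2", "C", type="wire", amount=9400)

sub = k_hop_subgraph(g, "A", k=2)
text = serialize(sub)
prompt = build_prompt(text, [("X -[wire $9900]-> Y", "suspicious: structuring")])
print(prompt)
```

The resulting `prompt` string would then be sent to an LLM; in-context examples of labeled subgraphs steer it toward analyst-style assessments with justifications.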
Similar Papers
A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges
General Finance
Helps computers understand and predict financial markets.
Actions Speak Louder than Prompts: A Large-Scale Study of LLMs for Graph Inference
Computation and Language
Computers learn better from connected information.
Large Language Models for Cryptocurrency Transaction Analysis: A Bitcoin Case Study
Cryptography and Security
Helps find bad actors in Bitcoin transactions.