Exploring the In-Context Learning Capabilities of LLMs for Money Laundering Detection in Financial Graphs

Published: July 20, 2025 | arXiv ID: 2507.14785v1

By: Erfan Pirmorad

Potential Business Impact:

Helps investigators flag potential money laundering by using AI to read transaction networks and explain suspicious patterns.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The complexity and interconnectivity of entities involved in money laundering demand investigative reasoning over graph-structured data. This paper explores the use of large language models (LLMs) as reasoning engines over localized subgraphs extracted from a financial knowledge graph. We propose a lightweight pipeline that retrieves k-hop neighborhoods around entities of interest, serializes them into structured text, and prompts an LLM via few-shot in-context learning to assess suspiciousness and generate justifications. Using synthetic anti-money laundering (AML) scenarios that reflect common laundering behaviors, we show that LLMs can emulate analyst-style logic, highlight red flags, and provide coherent explanations. While this study is exploratory, it illustrates the potential of LLM-based graph reasoning in AML and lays groundwork for explainable, language-driven financial crime analytics.
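The abstract describes a three-step pipeline: retrieve a k-hop neighborhood around an entity of interest, serialize it into structured text, and prompt an LLM with few-shot examples to judge suspiciousness and justify the call. Below is a minimal sketch of that flow, assuming a networkx transaction graph; the graph schema, prompt wording, few-shot example, and function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the pipeline from the abstract: k-hop subgraph extraction,
# text serialization, and few-shot prompt assembly for an AML assessment.
# Schema, prompt text, and helper names are illustrative assumptions.
import networkx as nx


def k_hop_subgraph(graph: nx.DiGraph, entity: str, k: int = 2) -> nx.DiGraph:
    """Return the k-hop neighborhood (ego graph) around an entity of interest."""
    return nx.ego_graph(graph, entity, radius=k, undirected=True)


def serialize_subgraph(sub: nx.DiGraph) -> str:
    """Serialize nodes and transaction edges into structured text for the prompt."""
    lines = []
    for node, attrs in sub.nodes(data=True):
        lines.append(f"ENTITY {node}: type={attrs.get('type', 'account')}, "
                     f"country={attrs.get('country', 'unknown')}")
    for src, dst, attrs in sub.edges(data=True):
        lines.append(f"TXN {src} -> {dst}: amount={attrs.get('amount')}, "
                     f"date={attrs.get('date')}")
    return "\n".join(lines)


# One hand-written few-shot example reflecting a common laundering pattern
# (structuring below a reporting threshold); purely illustrative.
FEW_SHOT_EXAMPLES = """\
### Example
Subgraph:
ENTITY A1: type=shell_company, country=PA
TXN A1 -> A2: amount=9900, date=2024-01-03
TXN A1 -> A2: amount=9800, date=2024-01-04
Assessment: SUSPICIOUS. Repeated transfers just under the 10,000 reporting
threshold suggest structuring.
"""


def build_prompt(subgraph_text: str) -> str:
    """Combine few-shot examples with the serialized subgraph under review."""
    return (
        "You are an AML analyst. Assess whether the activity below is suspicious "
        "and justify your answer with specific red flags.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n### Case under review\nSubgraph:\n{subgraph_text}\n"
        "Assessment:"
    )


if __name__ == "__main__":
    # Toy transaction graph standing in for the financial knowledge graph.
    g = nx.DiGraph()
    g.add_node("Acct_1", type="personal", country="US")
    g.add_node("Acct_2", type="shell_company", country="KY")
    g.add_node("Acct_3", type="personal", country="US")
    g.add_edge("Acct_1", "Acct_2", amount=9500, date="2024-02-01")
    g.add_edge("Acct_2", "Acct_3", amount=9400, date="2024-02-02")

    sub = k_hop_subgraph(g, "Acct_2", k=1)
    prompt = build_prompt(serialize_subgraph(sub))
    print(prompt)  # In practice, this prompt would be sent to an LLM client.
```

The key design choice in such a pipeline is the serialization step: the subgraph must be flattened into text the LLM can parse while preserving the relational structure (who pays whom, how much, and when) that an analyst would reason over.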

Page Count
4 pages

Category
Computer Science:
Machine Learning (CS)