Score: 1

When Tables Leak: Attacking String Memorization in LLM-Based Tabular Data Generation

Published: December 9, 2025 | arXiv ID: 2512.08875v1

By: Joshua Ward, Bochao Gu, Chi-Hua Wang, and more

Potential Business Impact:

AI makes fake data that accidentally reveals real secrets.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) have recently demonstrated remarkable performance in generating high-quality tabular synthetic data. In practice, two primary approaches have emerged for adapting LLMs to tabular data generation: (i) fine-tuning smaller models directly on tabular datasets, and (ii) prompting larger models with examples provided in context. In this work, we show that popular implementations from both regimes exhibit a tendency to compromise privacy by reproducing memorized patterns of numeric digits from their training data. To systematically analyze this risk, we introduce a simple No-box Membership Inference Attack (MIA) called LevAtt that assumes adversarial access to only the generated synthetic data and targets the string sequences of numeric digits in synthetic observations. Using this approach, our attack exposes substantial privacy leakage across a wide range of models and datasets, and in some cases, is even a perfect membership classifier on state-of-the-art models. Our findings highlight a unique privacy vulnerability of LLM-based synthetic data generation and the need for effective defenses. To this end, we propose two methods, including a novel sampling strategy that strategically perturbs digits during generation. Our evaluation demonstrates that this approach can defeat these attacks with minimal loss of fidelity and utility of the synthetic data.
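To make the attack idea concrete, here is a minimal, hypothetical sketch of what a LevAtt-style no-box membership inference score could look like. The abstract only states that the attack targets the string sequences of numeric digits in the synthetic data; this sketch assumes (as the name LevAtt suggests) that the core signal is the minimum Levenshtein distance between a candidate record's digit string and the digit strings of the released synthetic rows. The function names and the scoring rule below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a LevAtt-style no-box membership inference attack.
# Assumption (not from the paper): a candidate's score is the minimum
# Levenshtein distance between its concatenated digit string and the digit
# strings of the synthetic records; a smaller score suggests the record's
# digits were memorized, i.e. the candidate is more likely a training member.

import re

def digit_string(record: dict) -> str:
    """Concatenate all numeric digits appearing in a record's values."""
    return "".join(re.findall(r"\d", " ".join(str(v) for v in record.values())))

def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                    # deletion
                            curr[j - 1] + 1,                # insertion
                            prev[j - 1] + (ca != cb)))      # substitution
        prev = curr
    return prev[-1]

def membership_score(candidate: dict, synthetic_records: list[dict]) -> int:
    """Lower score = closer digit match to some synthetic row = more suspicious."""
    c = digit_string(candidate)
    return min(levenshtein(c, digit_string(s)) for s in synthetic_records)

# Toy usage: a candidate whose digits were reproduced verbatim scores 0,
# while an unrelated record ends up at a strictly positive distance.
synthetic = [{"age": 42, "income": 73125}, {"age": 35, "income": 51200}]
member = {"age": 42, "income": 73125}
nonmember = {"age": 29, "income": 88031}
print(membership_score(member, synthetic))      # 0
print(membership_score(nonmember, synthetic))   # > 0
```

In practice the attacker would threshold this score to decide membership; the proposed defense of perturbing digits during generation would, under these assumptions, push memorized records away from distance zero at a small cost in fidelity.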

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)