When Tables Leak: Attacking String Memorization in LLM-Based Tabular Data Generation
By: Joshua Ward, Bochao Gu, Chi-Hua Wang, and more
Potential Business Impact:
AI makes fake data that accidentally reveals real secrets.
Large Language Models (LLMs) have recently demonstrated remarkable performance in generating high-quality tabular synthetic data. In practice, two primary approaches have emerged for adapting LLMs to tabular data generation: (i) fine-tuning smaller models directly on tabular datasets, and (ii) prompting larger models with examples provided in context. In this work, we show that popular implementations from both regimes exhibit a tendency to compromise privacy by reproducing memorized patterns of numeric digits from their training data. To systematically analyze this risk, we introduce a simple No-box Membership Inference Attack (MIA) called LevAtt that assumes adversarial access to only the generated synthetic data and targets the string sequences of numeric digits in synthetic observations. Using this approach, our attack exposes substantial privacy leakage across a wide range of models and datasets, and in some cases, is even a perfect membership classifier on state-of-the-art models. Our findings highlight a unique privacy vulnerability of LLM-based synthetic data generation and the need for effective defenses. To this end, we propose two methods, including a novel sampling strategy that strategically perturbs digits during generation. Our evaluation demonstrates that this approach can defeat these attacks with minimal loss of fidelity and utility of the synthetic data.
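To make the attack surface concrete, below is a minimal, hypothetical sketch of the kind of no-box membership scoring the abstract describes, plus a toy digit-perturbation defense. It assumes, based only on the name, that LevAtt scores candidate records by the minimum Levenshtein distance between their numeric digit strings and those of the released synthetic rows; the function names, the digit-extraction rule, and the perturbation probability are all illustrative assumptions, not the authors' implementation.

```python
import random
import re


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def extract_digits(record: dict) -> str:
    """Concatenate the digit characters of a record's values into one string
    (an assumed preprocessing step for digit-sequence comparison)."""
    return "".join(re.findall(r"\d", "".join(str(v) for v in record.values())))


def membership_scores(candidates, synthetic):
    """No-box membership score: a smaller minimum edit distance between a
    candidate's digit string and any synthetic row's digit string is treated
    as stronger evidence that the candidate was in the training data."""
    synth_digits = [extract_digits(s) for s in synthetic]
    scores = []
    for rec in candidates:
        d = extract_digits(rec)
        scores.append(min(levenshtein(d, s) for s in synth_digits))
    return scores  # lower score => more likely a training member


def perturb_digits(value: str, p: float = 0.1, rng: random.Random | None = None) -> str:
    """Toy defense sketch: with probability p, replace each digit with a random
    digit at generation time, breaking exact digit-string reproduction. This is
    an assumption about what 'strategically perturbing digits' could look like,
    not the paper's actual sampling strategy."""
    rng = rng or random.Random(0)
    return "".join(
        str(rng.randrange(10)) if ch.isdigit() and rng.random() < p else ch
        for ch in value
    )
```

In this framing, an attacker only needs the released synthetic table: records whose digit sequences sit unusually close to some synthetic row are flagged as likely training members, which is why even light randomization of digits during sampling can blunt the signal.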
Similar Papers
Membership Inference over Diffusion-models-based Synthetic Tabular Data
Cryptography and Security
Protects private data when making fake data.
Privacy Auditing Synthetic Data Release through Local Likelihood Attacks
Machine Learning (CS)
Finds hidden private info in fake data.
Synth-MIA: A Testbed for Auditing Privacy Leakage in Tabular Data Synthesis
Cryptography and Security
Finds hidden secrets in fake data.