In-Context Bias Propagation in LLM-Based Tabular Data Generation
By: Pol G. Recasens, Alberto Gutierrez, Jordi Torres, and more
Potential Business Impact:
AI can accidentally create unfair data.
Large Language Models (LLMs) are increasingly used for synthetic tabular data generation through in-context learning (ICL), offering a practical solution for data augmentation in data-scarce scenarios. While prior work has shown the potential of LLMs to improve downstream task performance by augmenting underrepresented groups, these benefits often assume access to a subset of unbiased in-context examples that are representative of the real dataset. In real-world settings, however, data is frequently noisy and demographically skewed. In this paper, we systematically study how statistical biases within in-context examples propagate to the distribution of synthetic tabular data, showing that even mild in-context biases lead to global statistical distortions. We further introduce an adversarial scenario where a malicious contributor can inject bias into the synthetic dataset via a subset of in-context examples, ultimately compromising the fairness of downstream classifiers for a targeted protected subgroup. Our findings demonstrate a new vulnerability in LLM-based data generation pipelines that rely on in-context prompts in sensitive domains.
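To make the setup concrete, the sketch below illustrates the kind of ICL-based tabular generation pipeline the paper studies, and how a contributor could skew the in-context examples. This is a minimal, hypothetical illustration rather than the authors' implementation: the schema, the example rows, and the `call_llm` stub are all assumptions introduced for clarity.

```python
import random

# Hypothetical example rows for an income-prediction table (Adult-style schema).
clean_examples = [
    {"age": 37, "sex": "Female", "education": "Bachelors", "income": ">50K"},
    {"age": 45, "sex": "Male",   "education": "HS-grad",   "income": "<=50K"},
    {"age": 29, "sex": "Female", "education": "Masters",   "income": ">50K"},
]

# A malicious contributor injects examples that tie a protected attribute
# (sex == "Female") to the negative label, skewing the in-context distribution.
biased_examples = [
    {"age": 31, "sex": "Female", "education": "Bachelors", "income": "<=50K"},
    {"age": 26, "sex": "Female", "education": "Masters",   "income": "<=50K"},
]

def row_to_text(row):
    """Serialize one tabular record as a comma-separated feature string."""
    return ", ".join(f"{k}: {v}" for k, v in row.items())

def build_prompt(examples, n_new_rows=5):
    """Compose an in-context learning prompt asking the LLM for new rows."""
    header = (
        "You are a tabular data generator. Below are example records.\n"
        "Generate new records that follow the same format and distribution.\n\n"
    )
    body = "\n".join(row_to_text(r) for r in examples)
    return f"{header}{body}\n\nGenerate {n_new_rows} new records:"

def call_llm(prompt):
    """Placeholder for an actual LLM call (API client or local model)."""
    raise NotImplementedError("Plug in an LLM client here.")

# The attacker's rows are mixed into the otherwise clean in-context set;
# even a small biased fraction can distort the synthetic data the LLM produces.
in_context = clean_examples + biased_examples
random.shuffle(in_context)
prompt = build_prompt(in_context)
print(prompt)
```

In this sketch the downstream harm would arise when the biased synthetic rows are used to train a classifier, which then underperforms on the targeted subgroup; the paper's contribution is measuring how strongly such in-context skew propagates to the generated distribution.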
Similar Papers
Privacy-Aware In-Context Learning for Large Language Models
Machine Learning (CS)
Keeps private text secret when computers write.
Bridging the Gap: In-Context Learning for Modeling Human Disagreement
Computation and Language
Helps computers understand when people disagree.
Dual Debiasing for Noisy In-Context Learning for Text Generation
Computation and Language
Finds bad examples to make AI smarter.