Processing of synthetic data in AI development for healthcare and the definition of personal data in EU law
By: Vibeke Binz Vallevik, Anne Kjersti C. Befring, Severin Elvatun, and others
Potential Business Impact:
Lets doctors share patient info safely for new cures.
Artificial intelligence (AI) has the potential to transform healthcare, but it requires access to health data. Synthetic data, generated by machine learning models trained on real data, offers a way to share data while preserving privacy. However, uncertainties in the practical application of the General Data Protection Regulation (GDPR) create an administrative burden that limits the benefits of synthetic data. Through a systematic analysis of relevant legal sources and an empirical study, this article explores whether synthetic data should be classified as personal data under the GDPR. The study investigates the residual identification risk by generating synthetic data and simulating inference attacks, challenging common perceptions of technical identification risk. The findings suggest that synthetic data is likely anonymous, depending on certain factors, but highlight uncertainties about what constitutes a reasonably likely risk. To promote innovation, the study calls for clearer regulations that balance privacy protection with the advancement of AI in healthcare.
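The abstract mentions measuring residual identification risk by simulating attacks on synthetic data. As a hedged illustration only (not the authors' actual method or code), the sketch below shows one simple proxy metric often used in such assessments: distance to closest record (DCR). If a generator memorizes its training data, synthetic records land suspiciously close to real ones, which an attacker could exploit for linkage. All data and thresholds here are toy assumptions.

```python
import math
import random

def euclidean(a, b):
    """Euclidean distance between two equal-length numeric tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def distance_to_closest_record(synthetic, real):
    """For each synthetic record, the distance to its nearest real record."""
    return [min(euclidean(s, r) for r in real) for s in synthetic]

random.seed(0)
# Toy "real" patient records (e.g. two normalized clinical features).
real = [(random.random(), random.random()) for _ in range(100)]
# A well-behaved generator: independent draws from the same distribution.
independent = [(random.random(), random.random()) for _ in range(100)]
# A leaky generator that memorizes training rows: near-copies of `real`.
leaky = [(x + 1e-6, y) for x, y in real]

dcr_independent = sorted(distance_to_closest_record(independent, real))
dcr_leaky = sorted(distance_to_closest_record(leaky, real))

# Median DCR near zero signals memorization (higher linkage risk).
print(f"independent draws, median DCR: {dcr_independent[50]:.4f}")
print(f"memorizing generator, median DCR: {dcr_leaky[50]:.8f}")
```

A real assessment, like the one the article describes, would go further (membership- and attribute-inference attacks, holdout baselines), but this illustrates why "synthetic" does not automatically mean "anonymous": the risk depends on how closely the generated records track individual training records.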
Similar Papers
Synthetic Data for Robust AI Model Development in Regulated Enterprises
Computers and Society
Creates fake data for AI, protecting privacy.
Differentially Private Synthetic Data Generation Using Context-Aware GANs
Machine Learning (CS)
Creates private fake data that follows important rules.
The Synthetic Mirror -- Synthetic Data at the Age of Agentic AI
Computers and Society
Makes AI trustworthy with fake data.