Bridging the Generalisation Gap: Synthetic Data Generation for Multi-Site Clinical Model Validation
By: Bradley Segal, Joshua Fieggen, David Clifton, and others
Potential Business Impact:
Helps test whether medical AI models work reliably and fairly across different hospitals.
Ensuring the generalisability of clinical machine learning (ML) models across diverse healthcare settings remains a significant challenge due to variability in patient demographics, disease prevalence, and institutional practices. Existing model evaluation approaches often rely on real-world datasets, which are limited in availability, embed confounding biases, and lack the flexibility needed for systematic experimentation. Furthermore, while generative models aim for statistical realism, they often lack transparency and explicit control over factors driving distributional shifts. In this work, we propose a novel structured synthetic data framework designed for the controlled benchmarking of model robustness, fairness, and generalisability. Unlike approaches focused solely on mimicking observed data, our framework provides explicit control over the data generating process, including site-specific prevalence variations, hierarchical subgroup effects, and structured feature interactions. This enables targeted investigation into how models respond to specific distributional shifts and potential biases. Through controlled experiments, we demonstrate the framework's ability to isolate the impact of site variations, support fairness-aware audits, and reveal generalisation failures, particularly highlighting how model complexity interacts with site-specific effects. This work contributes a reproducible, interpretable, and configurable tool designed to advance the reliable deployment of ML in clinical settings.
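The paper's framework is not reproduced here, but a minimal sketch of what such an explicitly controlled data-generating process could look like is shown below. All names, coefficients, and the two-site setup are illustrative assumptions for intuition only, not the authors' implementation: each site's outcome risk is built from a known baseline, a site-level prevalence shift, a subgroup effect, and a structured feature interaction, so any performance gap between sites can be traced back to those knobs.

import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

def generate_site(site_name, n, base_log_odds, site_shift, subgroup_effects):
    """Simulate one site's cohort from an explicit, fully known outcome model.

    All parameters are hypothetical: `site_shift` controls site-specific
    prevalence, `subgroup_effects` encodes hierarchical subgroup effects,
    and the age x biomarker term is a structured feature interaction.
    """
    age = rng.normal(60, 12, n)                       # continuous feature
    biomarker = rng.normal(1.0, 0.3, n)               # continuous feature
    subgroup = rng.choice(list(subgroup_effects), n)  # e.g. demographic strata

    # Explicit data-generating process on the log-odds scale.
    logit = (
        base_log_odds
        + site_shift
        + np.array([subgroup_effects[s] for s in subgroup])
        + 0.03 * (age - 60)
        + 0.8 * (biomarker - 1.0)
        + 0.02 * (age - 60) * (biomarker - 1.0)       # interaction term
    )
    p = 1.0 / (1.0 + np.exp(-logit))
    outcome = rng.binomial(1, p)

    return pd.DataFrame({
        "site": site_name,
        "age": age,
        "biomarker": biomarker,
        "subgroup": subgroup,
        "outcome": outcome,
    })

# Two synthetic sites that differ only in prevalence shift and subgroup
# effects, enabling controlled benchmarking of cross-site generalisation.
train_site = generate_site("A", 5000, base_log_odds=-2.0, site_shift=0.0,
                           subgroup_effects={"g1": 0.0, "g2": 0.3})
test_site = generate_site("B", 5000, base_log_odds=-2.0, site_shift=0.7,
                          subgroup_effects={"g1": 0.0, "g2": -0.4})
print(train_site["outcome"].mean(), test_site["outcome"].mean())

A model trained on site A and evaluated on site B in this setup would face a shift whose cause is known by construction, which is the kind of targeted experiment the abstract describes.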
Similar Papers
Synthetic Dataset Evaluation Based on Generalized Cross Validation
CV and Pattern Recognition
Tests how well fake data works like real data.
Boosting Statistic Learning with Synthetic Data from Pretrained Large Models
Machine Learning (Stat)
Makes computer models learn better with fake data.
Assessing Generative Models for Structured Data
Machine Learning (CS)
Makes fake data that looks like real data.