TrajSyn: Privacy-Preserving Dataset Distillation from Federated Model Trajectories for Server-Side Adversarial Training
By: Mukur Gupta, Niharika Gupta, Saifur Rahman, and more
Deep learning models deployed on edge devices are increasingly used in safety-critical applications. However, their vulnerability to adversarial perturbations poses significant risks, especially in Federated Learning (FL) settings, where identical models are distributed across thousands of clients. While adversarial training is a strong defense, it is difficult to apply in FL due to strict client-data privacy constraints and the limited compute available on edge devices. In this work, we introduce TrajSyn, a privacy-preserving framework that enables effective server-side adversarial training by synthesizing a proxy dataset from the trajectories of client model updates, without accessing raw client data. We show that TrajSyn consistently improves adversarial robustness on image classification benchmarks while adding no extra compute burden on client devices.
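The abstract's core idea — distilling a synthetic proxy dataset from the trajectory of client model updates, then adversarially training on it at the server — can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual method: it uses a linear regression model, recovers per-round gradients from the observed updates, fits synthetic data by gradient matching against that trajectory, and then runs an FGSM-style adversarial training loop on the synthetic proxy. All names (`match_loss`, `w_adv`, etc.) and the specific distillation objective are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated client data that the server never sees directly ---
d, n = 5, 200
X_true = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y_true = X_true @ w_star + 0.1 * rng.normal(size=n)

def grad(X, y, w):
    """Mean squared-error gradient of a linear model."""
    return X.T @ (X @ w - y) / len(y)

# --- Record a short federated trajectory: the server observes the
# --- weights w_t and the updates, which imply the client gradients. ---
lr, T = 0.1, 8
w = np.zeros(d)
trajectory = []  # pairs (w_t, gradient implied by the update at w_t)
for _ in range(T):
    g = grad(X_true, y_true, w)      # computed on the clients
    trajectory.append((w.copy(), g))
    w = w - lr * g

# --- Distillation step (sketch): fit synthetic data (Xs, ys) whose
# --- gradients at each w_t match the gradients implied by the updates. ---
m = 20
Xs = rng.normal(size=(m, d))
ys = rng.normal(size=m)

def match_loss(Xs, ys):
    return sum(np.sum((grad(Xs, ys, wt) - gt) ** 2) for wt, gt in trajectory)

eta = 0.01
loss0 = match_loss(Xs, ys)
for _ in range(500):
    gX = np.zeros_like(Xs)
    gy = np.zeros_like(ys)
    for wt, gt in trajectory:
        e = Xs @ wt - ys               # residuals of the synthetic data
        r = grad(Xs, ys, wt) - gt      # gradient mismatch at w_t
        # Analytic gradients of ||r||^2 w.r.t. Xs and ys for the linear model.
        gX += (2.0 / m) * (np.outer(e, r) + np.outer(Xs @ r, wt))
        gy += (-2.0 / m) * (Xs @ r)
    Xs -= eta * gX
    ys -= eta * gy

# --- Server-side adversarial training on the synthetic proxy only. ---
eps = 0.1
w_adv = np.zeros(d)
for _ in range(100):
    # FGSM-style worst-case input perturbation for a linear model:
    # the per-sample loss gradient w.r.t. the input x is (x.w - y) * w.
    residual = Xs @ w_adv - ys
    X_pert = Xs + eps * np.sign(np.outer(residual, w_adv))
    w_adv -= lr * grad(X_pert, ys, w_adv)
```

After distillation, `match_loss(Xs, ys)` should be well below its initial value `loss0`, and `w_adv` is trained only on the synthetic proxy — mirroring, at toy scale, how the clients' raw data never reaches the server-side adversarial training loop.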