A Privacy-Preserving Cloud Architecture for Distributed Machine Learning at Scale
By: Vinoth Punniyamoorthy, Ashok Gadi Parthi, Mayilsamy Palanigounder, and others
Potential Business Impact:
Keeps private data safe when computers learn together.
Distributed machine learning systems require strong privacy guarantees, verifiable compliance, and scalable deployment across heterogeneous and multi-cloud environments. This work introduces a cloud-native privacy-preserving architecture that integrates federated learning, differential privacy, zero-knowledge compliance proofs, and adaptive governance powered by reinforcement learning. The framework supports secure model training and inference without centralizing sensitive data, while enabling cryptographically verifiable policy enforcement across institutions and cloud platforms. A full prototype deployed across hybrid Kubernetes clusters demonstrates reduced membership-inference risk, consistent enforcement of formal privacy budgets, and stable model performance under differential privacy. Experimental evaluation across multi-institution workloads shows that the architecture maintains utility with minimal overhead while providing continuous, risk-aware governance. The proposed framework establishes a practical foundation for deploying trustworthy and compliant distributed machine learning systems at scale.
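To make the federated-learning-plus-differential-privacy combination concrete, here is a minimal sketch of the standard DP-FedAvg pattern the abstract alludes to: each client's model update is L2-clipped to bound its influence, the server averages the clipped updates, and calibrated Gaussian noise is added before the result is applied. The function names, the `noise_multiplier` parameter, and the toy updates are illustrative assumptions, not code from the paper's framework.

```python
import random

def clip(update, max_norm):
    """L2-clip a client's update so no single participant dominates the average."""
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def dp_fedavg(client_updates, max_norm=1.0, noise_multiplier=1.0, seed=0):
    """Average clipped client updates, then add Gaussian noise (DP-FedAvg sketch).

    With clipping norm C and n clients, adding noise of stddev
    noise_multiplier * C / n to the mean yields a differentially private
    aggregate; the privacy budget spent depends on noise_multiplier and
    the number of rounds (tracked by a privacy accountant in practice).
    """
    rng = random.Random(seed)
    clipped = [clip(u, max_norm) for u in client_updates]
    n = len(clipped)
    sigma = noise_multiplier * max_norm / n
    return [
        sum(c[i] for c in clipped) / n + rng.gauss(0.0, sigma)
        for i in range(len(clipped[0]))
    ]

# Three hypothetical clients report 2-dimensional model updates;
# the server never sees their raw training data, only these updates.
updates = [[0.5, -1.2], [2.0, 0.3], [-0.4, 0.9]]
noisy_mean = dp_fedavg(updates, max_norm=1.0, noise_multiplier=0.5)
```

In a deployed system this aggregation step would sit behind secure aggregation and a privacy accountant that enforces the formal budgets the abstract mentions; the sketch only shows the clipping-and-noise core.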
Similar Papers
Experiences Building Enterprise-Level Privacy-Preserving Federated Learning to Power AI for Science
Distributed, Parallel, and Cluster Computing
Lets AI learn from private data safely.
Research on Large Language Model Cross-Cloud Privacy Protection and Collaborative Training based on Federated Learning
Cryptography and Security
Keeps private data safe when computers share learning.
A Privacy-Preserving Ecosystem for Developing Machine Learning Algorithms Using Patient Data: Insights from the TUM.ai Makeathon
Distributed, Parallel, and Cluster Computing
Helps doctors train AI without seeing patient data.