Communication-Efficient and Privacy-Adaptable Mechanism -- a Federated Learning Scheme with Convergence Analysis
By: Chun Hei Michael Shiu, Chih Wei Ling
Federated learning enables multiple parties to jointly train a model without sharing their underlying data, offering a practical pathway to privacy-preserving collaboration under data-governance constraints. Continued study of federated learning is essential to address its key challenges, including communication efficiency and privacy protection among the participating parties. A recent line of work introduced a novel approach, the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves both objectives simultaneously. CEPAM leverages the rejection-sampled universal quantizer (RSUQ), a randomized vector quantizer whose quantization error matches a prescribed noise distribution, which can be tuned to customize the privacy protection afforded to each party. In this work, we theoretically analyze the privacy guarantees and convergence properties of CEPAM. Moreover, we assess CEPAM's utility through experimental evaluations, including convergence profiles against baseline methods and accuracy-privacy trade-offs across parties.
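To make the quantization idea concrete, below is a minimal sketch (in Python/NumPy, not the paper's implementation) of subtractive dithered universal quantization, the classical building block that RSUQ extends: encoder and decoder share a dither drawn from a common seed, only integer lattice indices are communicated, and the reconstruction error is uniform noise independent of the input. RSUQ adds a rejection-sampling step that reshapes this error into the prescribed noise used for privacy; that step is omitted here, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def dithered_quantize(x, delta, dither_rng):
    """Subtractive dithered (universal) scalar quantization of a vector x.

    Encoder and decoder draw the same dither u from a shared seed, so only
    the integer indices need to be transmitted. The reconstruction error
    x_hat - x is uniform on [-delta/2, delta/2) and independent of x.
    """
    u = dither_rng.uniform(-delta / 2.0, delta / 2.0, size=x.shape)  # shared dither
    indices = np.round((x + u) / delta)   # what the encoder actually transmits
    x_hat = indices * delta - u           # decoder's reconstruction
    return indices, x_hat

# Example: quantize a small block of (say) gradient entries.
x = np.random.default_rng(1).normal(size=5)
_, x_hat = dithered_quantize(x, delta=0.5, dither_rng=np.random.default_rng(42))
print(x_hat - x)  # each entry lies in [-0.25, 0.25), uniform and independent of x
```

In a scheme like CEPAM, this error-shaping property is what lets the quantization noise double as the privacy noise, so communication compression and privacy protection come from the same randomized operation.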