Federated Learning of Quantile Inference under Local Differential Privacy
By: Leheng Cai, Qirui Hu, Shuyuan Wu
Potential Business Impact:
Helps computers learn from private data safely.
In this paper, we investigate federated learning for quantile inference under local differential privacy (LDP). We propose an estimator based on local stochastic gradient descent (SGD), whose local gradients are perturbed via a randomized mechanism with global parameters, making the procedure tolerant of communication and storage constraints without compromising statistical efficiency. Although the quantile loss and its corresponding gradient do not satisfy the standard smoothness conditions typically assumed in the existing literature, we establish asymptotic normality for our estimator as well as a functional central limit theorem. The proposed method accommodates data heterogeneity and allows each server to operate with an individual privacy budget. Furthermore, we construct confidence intervals for the target value through a self-normalization approach, thereby circumventing the need to estimate additional nuisance parameters. Extensive numerical experiments and a real-data application validate the theoretical guarantees of the proposed methodology.
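To make the two main ingredients concrete, here is a minimal single-machine sketch in Python, not the authors' federated algorithm: quantile SGD whose bounded subgradient is perturbed with Laplace noise for per-sample epsilon-LDP, followed by a self-normalized (random-scaling) confidence interval that avoids nuisance-parameter estimation. The function names, the Laplace mechanism, the step-size schedule, and the single-server setting are illustrative assumptions; the paper's method aggregates such updates across servers with individual privacy budgets.

```python
import numpy as np

def ldp_quantile_sgd(x, tau, eps, c=1.0, alpha=0.51, rng=None):
    """Polyak-averaged SGD for the tau-quantile with Laplace-perturbed gradients.

    The subgradient of the check loss rho_tau(x - q) in q is 1{x <= q} - tau,
    which lies in [-tau, 1 - tau] (range 1), so adding Laplace(1/eps) noise to
    each reported gradient yields eps-LDP per sample. Hypothetical sketch, not
    the paper's federated mechanism.
    """
    rng = np.random.default_rng(rng)
    q = 0.0                       # running SGD iterate
    qbar = 0.0                    # Polyak-Ruppert average (the estimator)
    path = np.empty(len(x))       # trajectory of averages, kept for the CI
    for t, xi in enumerate(x, start=1):
        grad = float(xi <= q) - tau            # bounded quantile subgradient
        grad += rng.laplace(scale=1.0 / eps)   # LDP perturbation
        q -= c * t ** (-alpha) * grad          # Robbins-Monro step
        qbar += (q - qbar) / t                 # online average
        path[t - 1] = qbar
    return qbar, path

def self_normalized_ci(path, crit=6.747):
    """Approximate 95% CI for the quantile via random scaling of the path.

    Uses V_n^2 = n^{-2} sum_t t^2 (qbar_t - qbar_n)^2; 6.747 is the asymptotic
    97.5% critical value of W(1) / sqrt(int_0^1 (W(r) - r W(1))^2 dr)
    tabulated by Abadir and Paruolo (1997). No nuisance parameters estimated.
    """
    n = len(path)
    t = np.arange(1, n + 1)
    vn = np.sqrt(np.sum((t * (path - path[-1])) ** 2)) / n
    half = crit * vn / np.sqrt(n)
    return path[-1] - half, path[-1] + half

if __name__ == "__main__":
    # Median of standard normal data under eps = 1 LDP; true value is 0.
    x = np.random.default_rng(0).standard_normal(100_000)
    qhat, path = ldp_quantile_sgd(x, tau=0.5, eps=1.0, rng=1)
    lo, hi = self_normalized_ci(path)
    print(f"estimate {qhat:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```

The self-normalizer is built entirely from the stored trajectory of running averages, which is why, as the abstract notes, no asymptotic-variance or density estimation is needed to form the interval.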
Similar Papers
Online Inference for Quantiles by Constant Learning-Rate Stochastic Gradient Descent
Machine Learning (Stat)
Makes computer learning more accurate and reliable.
Online differentially private inference in stochastic gradient descent
Methodology
Keeps your personal data private while learning.
Decentralized Quantile Regression for Feature-Distributed Massive Datasets with Privacy Guarantees
Computation
Protects private data while learning from many computers.