Technical note on Fisher Information for Robust Federated Cross-Validation
By: Behraj Khan, Tahir Qasim Syed
Potential Business Impact:
Fixes AI learning when data is spread out.
When training data are fragmented across batches or federated across different geographic locations, trained models exhibit performance degradation. That degradation is partly due to covariate shift induced by the data having been fragmented across time and space, producing dissimilar empirical training distributions. Each fragment's distribution differs slightly from a hypothetical unfragmented training distribution of covariates, and from the single validation distribution. To address this problem, we propose Fisher Information for Robust fEderated validation (\textbf{FIRE}). This method accumulates fragmentation-induced covariate-shift divergences from the global training distribution via an approximate Fisher information. That term, which we prove to be a more computationally tractable estimate, is then used as a per-fragment loss penalty, enabling scalable distribution alignment. FIRE outperforms importance-weighting benchmarks by up to $5.1\%$ and federated learning (FL) benchmarks by up to $5.3\%$ on shifted validation sets.
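The abstract leaves the exact estimator to the paper; as a minimal illustrative sketch only, one common way to realize "approximate Fisher information as a per-fragment loss penalty" is a diagonal empirical Fisher with an EWC-style quadratic drift term toward the global parameters. All function names below (`diag_fisher`, `fragment_loss`) and the logistic-regression setting are assumptions for illustration, not the paper's method:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_grads(theta, X, y):
    """Per-example log-loss gradients for logistic regression: (n, d)."""
    p = sigmoid(X @ theta)
    return (p - y)[:, None] * X

def diag_fisher(theta, X, y):
    """Diagonal empirical Fisher: mean of squared per-example gradients."""
    g = per_example_grads(theta, X, y)
    return np.mean(g ** 2, axis=0)

def fragment_loss(theta, theta_global, X, y, lam=0.1):
    """Fragment negative log-likelihood plus a Fisher-weighted quadratic
    penalty on drift from the global parameters (illustrative sketch)."""
    p = sigmoid(X @ theta)
    eps = 1e-12
    nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    F = diag_fisher(theta_global, X, y)          # assumed: Fisher at global params
    penalty = lam * np.sum(F * (theta - theta_global) ** 2)
    return nll + penalty
```

When a fragment's parameters coincide with the global parameters the penalty vanishes, so the term only activates as local training drifts away from the global distribution's optimum, which is the alignment behavior the abstract describes.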
Similar Papers
Quantifying Privacy Leakage in Split Inference via Fisher-Approximated Shannon Information Analysis
Cryptography and Security
Protects secrets when computers learn together.
On the Fragility of Contribution Score Computation in Federated Learning
Machine Learning (CS)
Protects fair rewards when computers learn together.
Benchmarking Mutual Information-based Loss Functions in Federated Learning
Machine Learning (CS)
Makes AI fairer for everyone using less data.