PRIVEE: Privacy-Preserving Vertical Federated Learning Against Feature Inference Attacks
By: Sindhuja Madabushi, Ahmad Faraz Khan, Haider Ali, and others
Vertical Federated Learning (VFL) enables collaborative model training across organizations that share common user samples but hold disjoint feature spaces. Despite its potential, VFL is susceptible to feature inference attacks, in which adversarial parties exploit shared confidence scores (i.e., prediction probabilities) during inference to reconstruct private input features of other participants. To counter this threat, we propose PRIVEE (PRIvacy-preserving Vertical fEderated lEarning), a novel defense mechanism named after the French word privée, meaning "private." PRIVEE obfuscates confidence scores while preserving critical properties such as relative ranking and inter-score distances. Rather than exposing raw scores, PRIVEE shares only the transformed representations, mitigating the risk of reconstruction attacks without degrading model prediction accuracy. Extensive experiments show that PRIVEE achieves a threefold improvement in privacy protection compared to state-of-the-art defenses, while preserving full predictive performance against advanced feature inference attacks.
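The paper does not specify its transformation here, but the two properties it names (preserved relative ranking and preserved inter-score distances) can be illustrated with a minimal sketch: a positive-scale affine map of the confidence vector satisfies both, since multiplying by a positive constant and shifting keeps the order of scores and scales all pairwise distances uniformly. The function name and parameters below are hypothetical, not PRIVEE's actual mechanism.

```python
import numpy as np

def obfuscate_scores(scores, scale=3.0, shift=0.5):
    """Illustrative order- and distance-preserving obfuscation (NOT PRIVEE's
    actual transform). A positive scale preserves the ranking of scores,
    and all pairwise distances are multiplied by the same constant."""
    assert scale > 0, "scale must be positive to preserve ranking"
    return scale * np.asarray(scores, dtype=float) + shift

raw = np.array([0.7, 0.2, 0.1])       # raw confidence scores
obf = obfuscate_scores(raw)           # -> [2.6, 1.1, 0.8]

# The predicted class (argmax) is unchanged, so accuracy is unaffected,
# while the raw probabilities are never exposed.
print(np.argmax(raw), np.argmax(obf))  # same index
```

The key design point this sketch mirrors is that downstream prediction only needs the ordering of scores, so any strictly increasing transformation leaves accuracy intact while hiding the raw values an attacker would need for reconstruction.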
Similar Papers
VFEFL: Privacy-Preserving Federated Learning against Malicious Clients via Verifiable Functional Encryption
Cryptography and Security
Keeps your private data safe when computers learn together.
Data Privatization in Vertical Federated Learning with Client-wise Missing Problem
Methodology
Keeps private data safe when learning from many sources.
HybridVFL: Disentangled Feature Learning for Edge-Enabled Vertical Federated Multimodal Classification
Machine Learning (CS)
Lets phones learn health secrets without sharing them.