Oops!... They Stole it Again: Attacks on Split Learning
By: Tanveer Khan, Antonis Michalas
Potential Business Impact:
Keeps your private data safe during learning.
Split Learning (SL) is a collaborative learning approach that improves privacy by keeping data on the client side while sharing only intermediate outputs with a server. However, the distributed nature of SL introduces new security challenges, necessitating a comprehensive exploration of potential attacks. This paper systematically reviews attacks on SL, classifying them by factors such as the attacker's role, the type of privacy risk, when data leakage occurs, and where vulnerabilities exist. We also analyze existing defense methods, including cryptographic methods, data modification approaches, distributed techniques, and hybrid solutions. Our findings reveal security gaps, highlighting the effectiveness and limitations of existing defenses. By identifying open challenges and future directions, this work provides valuable guidance for addressing SL privacy issues and steering further research.
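To make the setting concrete, here is a minimal sketch of the split learning forward pass the abstract describes: the client computes up to a "cut layer" and sends only the intermediate activations (often called smashed data) to the server, which completes the computation. The model shape, weights, and function names are illustrative assumptions, not from the paper.

```python
import random

random.seed(0)

def matvec(W, x):
    # Plain matrix-vector product (no external libraries).
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def client_forward(x, W_client):
    # Client side: the raw input x never leaves this function;
    # only the intermediate activations are returned for transmission.
    return relu(matvec(W_client, x))

def server_forward(smashed, W_server):
    # Server side: completes the forward pass from the cut layer on.
    # The server sees only `smashed`, not the client's raw data --
    # the attacks surveyed in the paper try to invert exactly this.
    return matvec(W_server, smashed)

# Hypothetical tiny model: 4-dim input, 3-unit cut layer, 2 outputs.
W_client = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
W_server = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

x_private = [0.5, -1.2, 3.3, 0.7]            # stays on the client
smashed = client_forward(x_private, W_client)  # crosses the network
logits = server_forward(smashed, W_server)
```

Only `smashed` (and, during training, gradients flowing back through the cut layer) crosses the client-server boundary; the attacks and defenses the survey classifies all target that exchange.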
Similar Papers
A Taxonomy of Attacks and Defenses in Split Learning
Cryptography and Security
Protects private data when computers share learning.
P3SL: Personalized Privacy-Preserving Split Learning on Heterogeneous Edge Devices
Machine Learning (CS)
Lets phones learn without sharing private info.
Pigeon-SL: Robust Split Learning Framework for Edge Intelligence under Malicious Clients
Machine Learning (CS)
Keeps AI learning safe from bad data.