FLARE: A Wireless Side-Channel Fingerprinting Attack on Federated Learning
By: Md Nahid Hasan Shuvo, Moinul Hossain, Anik Mallik, and more
Potential Business Impact:
Finds hidden computer brain designs from wireless signals.
Federated Learning (FL) enables collaborative model training across distributed devices while safeguarding data and user privacy. However, FL remains susceptible to privacy threats that can compromise data directly. In contrast, whether an outsider can indirectly compromise the confidentiality of the FL model architecture (e.g., a convolutional neural network (CNN) or a recurrent neural network (RNN)) on a client device remains unexplored. If leaked, this information can enable next-level attacks tailored to the architecture. This paper proposes a novel side-channel fingerprinting attack that leverages flow-level and packet-level statistics of encrypted wireless traffic from an FL client to infer its deep learning model architecture. We name it FLARE, a fingerprinting framework based on FL Architecture REconnaissance. Evaluation across various CNN and RNN variants, including pre-trained and custom models trained over IEEE 802.11 Wi-Fi, shows that FLARE achieves an F1-score above 98% in closed-world scenarios and up to 91% in open-world scenarios. These results reveal that CNN and RNN models leak distinguishable traffic patterns, enabling architecture fingerprinting even under realistic FL settings with hardware, software, and data heterogeneity. To our knowledge, this is the first work to fingerprint FL model architectures by sniffing encrypted wireless traffic, exposing a critical side-channel vulnerability in current FL systems.
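To make the described pipeline concrete, below is a minimal sketch, not the authors' code, of how a traffic-based architecture fingerprint of this kind could be assembled: flow-level statistics are computed from the packet sizes and inter-arrival times of sniffed encrypted traffic and fed to an off-the-shelf classifier. The feature set, the synthetic data, and the random-forest classifier are illustrative assumptions; the paper's actual features and models may differ.

```python
# Minimal sketch (assumptions, not the authors' implementation): fingerprint a
# model architecture from flow-/packet-level statistics of encrypted traffic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def extract_features(packet_sizes, inter_arrival_times):
    """Summarize one sniffed traffic window (e.g., one FL training round)
    into flow-level statistics; payloads stay encrypted, only metadata is used."""
    return np.array([
        len(packet_sizes),                  # packet count
        np.sum(packet_sizes),               # total bytes (flow volume)
        np.mean(packet_sizes),              # mean packet size
        np.std(packet_sizes),               # packet-size variability
        np.percentile(packet_sizes, 90),    # large-packet tendency
        np.mean(inter_arrival_times),       # mean inter-arrival time
        np.std(inter_arrival_times),        # burstiness
    ])

# Hypothetical dataset: one feature vector per captured window, labeled with
# the architecture family that produced the traffic (e.g., a CNN or RNN variant).
rng = np.random.default_rng(0)
X = np.stack([
    extract_features(rng.integers(100, 1500, size=200),
                     rng.exponential(0.01, size=200))
    for _ in range(400)
])
y = rng.integers(0, 4, size=400)  # placeholder labels: 4 architecture classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```

With real captures in place of the synthetic arrays, the same structure (per-window feature extraction followed by supervised classification) is what a closed-world evaluation of such an attack would measure with an F1-score.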
Similar Papers
Fingerprinting Deep Learning Models via Network Traffic Patterns in Federated Learning
Machine Learning (CS)
Lets hackers guess computer models from network traffic.
FLARE: Adaptive Multi-Dimensional Reputation for Robust Client Reliability in Federated Learning
Machine Learning (CS)
Keeps AI learning safe from bad data.