Adversarial Attacks Against Deep Learning-Based Radio Frequency Fingerprint Identification

Published: December 12, 2025 | arXiv ID: 2512.12002v1

By: Jie Ma, Junqing Zhang, Guanxiong Shen, and more

Potential Business Impact:

Adversarial perturbations can trick RFFI-based device authentication into misidentifying a transmitter as a different device.

Business Areas:
RFID Hardware

Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of Things (IoT) devices. RFFI exploits deep learning models to extract hardware impairments that uniquely identify wireless devices. Recent studies show that deep learning-based RFFI is vulnerable to adversarial attacks; however, effective adversarial attacks against different types of RFFI classifiers have not yet been explored. In this paper, we carried out a comprehensive investigation into different adversarial attack methods on RFFI systems using various deep learning models. Three specific algorithms, the fast gradient sign method (FGSM), projected gradient descent (PGD), and universal adversarial perturbation (UAP), were analyzed. The attacks were launched against a LoRa-based RFFI system, and the experimental results showed that the generated perturbations were effective against convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and gated recurrent units (GRUs). We further used UAP to launch practical attacks, considering factors specific to the wireless context, including real-time implementation and the effectiveness of the attacks over time. Our experimental evaluation demonstrated that UAP can successfully launch adversarial attacks against RFFI, achieving a success rate of 81.7% when the adversary has almost no prior knowledge of the victim RFFI system.
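To illustrate the simplest of the three attack algorithms named in the abstract, the sketch below shows FGSM on a toy linear softmax classifier standing in for the paper's deep RFFI models. The model, weights, and epsilon value are hypothetical, chosen only to demonstrate the one-step, sign-of-gradient perturbation; the paper's actual networks and signal representations are not reproduced here.

```python
import numpy as np

# Hypothetical toy setup: a linear softmax classifier over 16 input features
# and 4 "device" classes stands in for a trained RFFI network.
rng = np.random.default_rng(0)
n_features, n_classes = 16, 4
W = rng.normal(size=(n_features, n_classes))  # toy "trained" weights
b = np.zeros(n_classes)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_wrt_input(x, y):
    # Gradient of cross-entropy loss w.r.t. the input x for a linear model:
    # dL/dx = W @ (softmax(logits) - onehot(y))
    p = softmax(W.T @ x + b)
    p[y] -= 1.0
    return W @ p

def fgsm(x, y, eps):
    # FGSM: a single L-infinity-bounded step in the direction that
    # increases the loss, x' = x + eps * sign(dL/dx)
    return x + eps * np.sign(loss_grad_wrt_input(x, y))

x = rng.normal(size=n_features)
y = int(np.argmax(W.T @ x + b))   # the model's clean prediction
x_adv = fgsm(x, y, eps=0.5)
# Each component of the perturbation is bounded by eps in magnitude.
print(float(np.max(np.abs(x_adv - x))))
```

PGD, the second algorithm analyzed, iterates this same step with a projection back into the epsilon ball, and UAP searches for a single perturbation that transfers across many inputs, which is what makes it suitable for the practical, low-knowledge attacks the paper evaluates.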

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
14 pages

Category
Computer Science:
Cryptography and Security