Unified Framework for Qualifying Security Boundary of PUFs Against Machine Learning Attacks

Published: January 8, 2026 | arXiv ID: 2601.04697v1

By: Hongming Fei, Zilong Hu, Prosanta Gope, and more

Potential Business Impact:

Provides a formal way to measure how resistant PUF-based hardware security is to machine-learning attacks.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

Physical Unclonable Functions (PUFs) serve as lightweight, hardware-intrinsic entropy sources widely deployed in IoT security applications. However, delay-based PUFs are vulnerable to Machine Learning Attacks (MLAs), undermining their assumed unclonability. There are no valid metrics for evaluating PUF MLA resistance other than empirical modelling experiments, which lack theoretical guarantees and are highly sensitive to advances in machine learning techniques. To address the fundamental gap between PUF designs and security qualifications, this work proposes a novel, formal, and unified framework for evaluating PUF security against modelling attacks by providing security lower bounds, independent of specific attack models or learning algorithms. We mathematically characterise the adversary's advantage in predicting responses to unseen challenges based solely on observed challenge-response pairs (CRPs), formulating the problem as a conditional probability estimation over the space of candidate PUFs. We present our analysis of previously "broken" PUFs, e.g., Arbiter PUFs, XOR PUFs, and Feed-Forward PUFs, and for the first time compare their MLA resistance in a formal way. In addition, we evaluate the currently "secure" CT PUF and show its security boundary. We demonstrate that the proposed approach systematically quantifies PUF resilience, captures subtle security differences, and provides actionable, theoretically grounded security guarantees for the practical deployment of PUFs.
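
The abstract's core idea, treating the adversary's best possible prediction on an unseen challenge as a conditional probability over all candidate PUFs consistent with the observed CRPs, can be made concrete with a small simulation. The sketch below is not the authors' framework: it assumes the standard linear additive-delay model of an n-bit Arbiter PUF (response = sign(w · φ(c)), with φ the parity feature map) and approximates the candidate-PUF space by Monte Carlo rejection sampling; the instance sizes, sample counts, and function names are illustrative assumptions.

```python
# Minimal sketch of the "conditional probability over candidate PUFs" view
# of MLA resistance, under an assumed linear Arbiter PUF delay model.
# All parameters and names are illustrative, not the paper's framework.

import numpy as np

rng = np.random.default_rng(0)

def parity_features(challenges: np.ndarray) -> np.ndarray:
    """Map {0,1} challenges to (n+1)-dim parity feature vectors phi(c):
    phi_i(c) = prod_{j>=i} (1 - 2 c_j), plus a constant bias term."""
    signs = 1 - 2 * challenges                       # 0/1 -> +1/-1
    prods = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([prods, np.ones((challenges.shape[0], 1))])

def responses(w: np.ndarray, phi: np.ndarray) -> np.ndarray:
    """Arbiter PUF response: sign of the total delay difference w . phi(c)."""
    return (phi @ w > 0).astype(int)

n, n_obs, n_cand = 8, 10, 200_000   # toy sizes so rejection sampling works

# Ground-truth PUF instance (unknown to the adversary).
w_true = rng.standard_normal(n + 1)

# CRPs observed by the adversary.
c_obs = rng.integers(0, 2, size=(n_obs, n))
phi_obs = parity_features(c_obs)
r_obs = responses(w_true, phi_obs)

# Unseen challenge whose response the adversary must predict.
phi_new = parity_features(rng.integers(0, 2, size=(1, n)))

# Monte Carlo over candidate PUFs: keep only candidates reproducing every
# observed CRP, then read off Pr[r(c_new) = 1 | observed CRPs].
w_cand = rng.standard_normal((n_cand, n + 1))
agree = (phi_obs @ w_cand.T > 0).astype(int) == r_obs[:, None]
consistent = agree.all(axis=0)
print(f"consistent candidates: {consistent.sum()} / {n_cand}")

# Bayes-optimal adversary predicts the majority response, so its
# advantage over a blind guess is |Pr[r=1 | CRPs] - 1/2|.
p_one = (phi_new @ w_cand[consistent].T > 0).mean()
print(f"Pr[r=1 | CRPs] ~= {p_one:.3f}, advantage ~= {abs(p_one - 0.5):.3f}")
```

Rejection sampling is only viable at toy sizes, since the fraction of consistent candidates shrinks roughly as 2^-|CRPs|, but it illustrates the point the abstract makes: the advantage bound is a property of the observed CRP set and the candidate-PUF space, not of any particular learning algorithm.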

Country of Origin
πŸ‡¬πŸ‡§ πŸ‡ΈπŸ‡¬ United Kingdom, Singapore

Page Count
13 pages

Category
Computer Science:
Cryptography and Security