When Random Is Bad: Selective CRPs for Protecting PUFs Against Modeling Attacks

Published: 01 Jan 2025, Last Modified: 19 May 2025. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2025. License: CC BY-SA 4.0
Abstract: Resource constraints are a significant challenge when designing secure IoT devices. To address this problem, the physical unclonable function (PUF) has been proposed as a lightweight security primitive capable of hardware fingerprinting. PUFs provide device identification by exploiting random manufacturing variations, which can be used for authentication with a verifier that identifies a device through challenge-response interactions with its PUF. However, extensive research has shown that PUFs are inherently vulnerable to machine learning (ML) modeling attacks. Such attacks use challenge-response samples to train ML algorithms to learn the underlying parameters that define the physical PUF. In this article, we present a defensive technique, to be used by the verifier, called selective challenge-response pairs (CRPs). We propose generating challenges selectively, instead of randomly, to negatively affect the parameters of ML models trained by attackers. Specifically, we provide three methods: 1) binary-coded with padding (BP); 2) random shifted pattern (RSP); and 3) binary shifted pattern (BSP). We characterize them based on their Hamming distance patterns, and evaluate their applicability based on their effect on the uniqueness, uniformity, and reliability of the underlying PUF implementation. Furthermore, we analyze and compare their resilience to ML modeling against traditional random challenges on the well-studied XOR PUF, feed-forward PUF, and lightweight secure PUF, showing improved resilience of up to 2 times the number of CRPs. Finally, we suggest applying our method to the interpose PUF to counter reliability-based attacks, which can otherwise overcome selective CRPs, and show that up to 4 times the number of CRPs can be exchanged securely.
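The abstract names the three schemes but does not spell out their constructions. As a minimal sketch of what "selective" (structured, non-random) challenge generation might look like, the snippet below follows one plausible reading of the binary-coded with padding (BP) idea: each challenge binary-encodes a counter and pads the remaining bits with a fixed value, so successive challenges follow a controlled Hamming-distance pattern rather than being uniformly random. The function name `bp_challenges` and the exact encoding are illustrative assumptions, not the paper's definition.

```python
def bp_challenges(n_challenges: int, n_bits: int, pad_bit: int = 0):
    """Yield structured (non-random) challenges, a hypothetical BP-style sketch.

    Each challenge binary-encodes an increasing counter (LSB first) and pads
    the rest of the challenge vector with a fixed bit, giving a controlled
    Hamming-distance pattern between consecutive challenges.
    """
    for i in range(n_challenges):
        # binary-code the counter, least-significant bit first
        coded = [(i >> k) & 1 for k in range(i.bit_length() or 1)]
        # pad up to the full challenge width with a fixed bit
        challenge = coded + [pad_bit] * (n_bits - len(coded))
        yield challenge[:n_bits]

# Example: four structured 8-bit challenges
for c in bp_challenges(4, 8):
    print(c)
```

In contrast to uniformly random challenges, a verifier using a generator like this controls exactly which region of the challenge space an eavesdropping attacker observes, which is the lever the paper uses against ML model training.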