Calibrated Uncertainty Estimation for Trustworthy Deep IoT Attack Detection

Biprodip Pal, Md. Saiful Islam, Alan Wee-Chung Liew

Published: 01 Jan 2025, Last Modified: 06 Nov 2025 · IEEE Transactions on Dependable and Secure Computing · CC BY-SA 4.0
Abstract: The rapid proliferation of Internet of Things (IoT) devices has exposed networks to a growing threat landscape. Advancements in artificial intelligence (AI), particularly deep learning (DL), offer innovative solutions for trustworthy IoT attack detection. Despite the resource-constrained nature of IoT devices, most existing AI-based IoT attack detection methods rely on complex deep learning models for automated feature extraction and generalization. However, DL models are inherently overconfident and tend to produce unreliable predictions both in-distribution and under distribution shift when deployed. Large model size, deterministic and overconfident predictions, and limited attention to uncertainty estimation together hinder the trustworthy deployment of DL models in applications where reliability is critical, not just high accuracy. In this paper, we present a framework for confidence calibration and uncertainty estimation of DL models in the context of IoT attack detection. Specifically, we propose a novel framework for the trustworthy design of deep IoT attack detection models, along with a lightweight DL model that achieves state-of-the-art (SoTA) attack detection performance. We demonstrate how both our DL model and other SoTA DL models can be effectively integrated into the proposed framework to ensure trustworthy IoT attack detection. Evaluated on the large-scale benchmark IoT-23 network traffic dataset in terms of calibration, uncertainty, and distribution shift, our approach delivers superior and trustworthy IoT attack detection performance.
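The abstract evaluates models "in terms of calibration" but does not spell out a metric. As background only (this is not necessarily the paper's evaluation code), a common way to quantify miscalibration is the expected calibration error (ECE): bin predictions by confidence, then average the gap between accuracy and mean confidence per bin, weighted by bin size. A minimal sketch with NumPy:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Standard binned ECE estimator.

    conf    -- predicted confidence of the chosen class, in [0, 1]
    correct -- 1 if the prediction was right, else 0
    """
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # assign each prediction to a confidence bin
    idx = np.clip(np.digitize(conf, edges[1:-1]), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            # bin weight * |accuracy - mean confidence| in that bin
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# A well-calibrated detector (70% confidence, 70% accuracy) scores ~0;
# an overconfident one (90% confidence, 50% accuracy) scores 0.4.
print(expected_calibration_error([0.7] * 10, [1] * 7 + [0] * 3))
print(expected_calibration_error([0.9] * 4, [1, 0, 1, 0]))
```

An overconfident IoT attack detector in this sense is one whose confidence systematically exceeds its accuracy; calibration methods such as temperature scaling aim to drive this gap toward zero.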