OUT-OF-DISTRIBUTION DETECTION IN MACHINE-LEARNING BASED SYSTEMS ENABLED BY TINYML

18 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Desk Rejected Submission
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: TinyML, Out-Of-Distribution Detection, Machine Learning Based Systems, Deep Neural Networks
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Tiny Machine Learning (TinyML) has emerged as a promising approach for incorporating Machine Learning (ML) into resource-constrained Internet of Things (IoT) devices. However, existing TinyML models struggle to handle out-of-distribution (OOD) inputs effectively. While various high-accuracy methods for detecting OOD inputs have been developed, they often overlook the constraints imposed by the deployment environment. In this paper, we introduce an innovative and efficient out-of-distribution detection method tailored for TinyML, an initiative that holds great potential to revolutionize application domains reliant on embedded command-and-control systems by bringing "ML intelligence" into their decision-making processes. We propose a novel framework, multi-level out-of-distribution detection, which leverages intermediate classifier outputs to dynamically and efficiently infer OOD inputs. We establish a direct correlation between the complexity of OOD data and the optimal exit level, demonstrating that easily detectable OOD examples can be identified early without descending into deeper layers. Our architecture comprises a DNN with a final Gaussian layer combined with a log-likelihood-ratio statistical test and an additional output neuron dedicated to OOD detection. Instead of relying on actual OOD data, we devise a novel method to create artificial OOD samples from in-distribution data, which are used to train the OOD detector neuron. An adjusted energy score facilitates the distinction of OOD examples at each exit, and we show both empirically and theoretically that it is well suited to networks employing multiple classifiers. We extensively evaluate the framework across 10 OOD datasets spanning a diverse range of complexities.
Our results not only demonstrate state-of-the-art performance but also highlight the method's speed and applicability to real-world scenarios.
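The energy-based early-exit idea described in the abstract can be sketched roughly as follows. Everything here is an illustrative assumption rather than the authors' implementation: the function names and thresholds are hypothetical, and the sketch uses the standard energy score E(x) = -T·logsumexp(logits/T) in place of the paper's adjusted variant.

```python
import numpy as np

def energy_score(logits, T=1.0):
    # Standard energy score: E(x) = -T * logsumexp(logits / T).
    # Higher (less negative) energy suggests an out-of-distribution input.
    z = np.asarray(logits, dtype=float) / T
    m = z.max()  # subtract the max for numerical stability
    return -T * (m + np.log(np.exp(z - m).sum()))

def multi_exit_ood(exit_logits, thresholds):
    # Walk the intermediate classifier outputs in order; flag OOD at the
    # first exit whose energy exceeds that exit's threshold, so easy OOD
    # examples are caught without evaluating deeper layers.
    for level, (logits, thr) in enumerate(zip(exit_logits, thresholds)):
        if energy_score(logits) > thr:
            return level, True
    # Reached the final exit without tripping a threshold: treat as in-distribution.
    return len(exit_logits) - 1, False

# Toy example: a confident (peaked) exit vs. a flat, uncertain one.
confident = [8.0, 0.1, 0.2]   # low energy -> in-distribution
flat = [0.3, 0.2, 0.1]        # high energy -> likely OOD
level, is_ood = multi_exit_ood([confident, flat], thresholds=[-2.0, -1.5])
# -> flagged OOD at level 1
```

In a deployed multi-exit network the thresholds would be calibrated per exit on in-distribution (or the artificial OOD) data; here they are chosen by hand purely for illustration.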
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1313