Topological Uncertainty: Monitoring trained neural networks through persistence of activation graphs

25 Aug 2022, OpenReview Archive Direct Upload
Abstract: Although neural networks are capable of reaching astonishing performance in a wide variety of contexts, properly training networks on complicated tasks requires expertise and can be computationally expensive. In industrial applications, data coming from an open-world setting may differ widely from the benchmark datasets on which a network was trained. Being able to monitor the presence of such variations without retraining the network is of crucial importance. In this article, we develop a method to monitor trained neural networks based on the topological properties of their activation graphs. To each new observation, we assign a Topological Uncertainty, a score that aims to assess the reliability of the predictions by investigating the whole network instead of only its final layer, as is typically done by practitioners. Our approach works entirely at a post-training level and does not require any assumption on the network architecture or optimization scheme, nor the use of data augmentation or auxiliary datasets; it can thus be faithfully applied to a large range of network architectures and data types. We showcase experimentally the potential of Topological Uncertainty in the context of trained network selection, Out-Of-Distribution detection, and shift detection, both on synthetic and real datasets of images and graphs.
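The abstract describes scoring a new observation by comparing topological summaries of its activation graph, across all layers, against summaries collected on training data. The sketch below is a minimal illustration of that kind of pipeline, not the authors' implementation: it assumes dense layers, builds a bipartite activation graph per layer with edge weights |w_ij * a_i(x)|, extracts a 0-dimensional persistence-style summary (component merge values under a decreasing edge-weight filtration, via union-find), and scores an input by its average distance to class-wise template summaries. All function names (activation_graph_weights, zero_dim_persistence, topological_uncertainty) and the choice of L1 distance are illustrative assumptions.

```python
import numpy as np

def activation_graph_weights(a_in, W):
    """Edge weights of the bipartite activation graph between two consecutive
    dense layers: |w_ij * a_i(x)|. (Assumed construction, based on the
    abstract's description of activation graphs.)"""
    return np.abs(W * a_in[:, None])           # shape (n_in, n_out)

def zero_dim_persistence(weights):
    """0-dimensional persistence-style summary of the weighted bipartite graph:
    process edges by decreasing weight (Kruskal / union-find) and record the
    weight at which two connected components merge."""
    n_in, n_out = weights.shape
    parent = list(range(n_in + n_out))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]      # path compression
            u = parent[u]
        return u

    edges = [(weights[i, j], i, n_in + j)
             for i in range(n_in) for j in range(n_out)]
    edges.sort(reverse=True)                   # decreasing edge-weight filtration

    merges = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            merges.append(w)                   # a component dies at weight w
    return np.array(merges)

def topological_uncertainty(x_acts, weights_per_layer, class_templates):
    """Average L1 distance between the observation's per-layer summaries and
    precomputed class-wise template summaries (e.g. the mean summary over
    correctly classified training points for the predicted class)."""
    dists = []
    for layer, W in enumerate(weights_per_layer):
        summary = zero_dim_persistence(activation_graph_weights(x_acts[layer], W))
        template = class_templates[layer]
        k = min(len(summary), len(template))
        dists.append(np.mean(np.abs(summary[:k] - template[:k])))
    return float(np.mean(dists))
```

In such a setup, the templates would be built once per predicted class from training activations, and a large Topological Uncertainty value would flag inputs whose activation graphs deviate from what the network produced in-distribution, which is the monitoring use case (network selection, OOD detection, shift detection) the abstract targets.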