Enhancing Healthcare Model Trustworthiness through Theoretically Guaranteed One-Hidden-Layer CNN Purification

Published: 07 Mar 2023, Last Modified: 04 Apr 2023 · ICLR 2023 Workshop TML4H Poster
Abstract: The use of Convolutional Neural Networks (CNNs) has brought significant benefits to the healthcare industry, enabling the successful execution of challenging tasks such as disease diagnosis and drug discovery. However, CNNs are vulnerable to various types of noise and attacks, including transmission noise, noisy media, truncation operations, and intentional poisoning attacks. To address these challenges, this paper proposes a robust recovery method that removes noise from potentially contaminated CNNs and offers an exact recovery guarantee for one-hidden-layer non-overlapping CNNs with the rectified linear unit (ReLU) activation function. The proposed method can recover both the weights and biases of the CNNs precisely, under some mild assumptions and an overparameterization setting. Our experimental results on synthetic data and the Wisconsin Diagnostic Breast Cancer (WDBC) dataset validate the efficacy of the proposed method. Additionally, we extend the method to eliminate poisoning attacks and demonstrate that it can serve as a defense strategy against malicious model poisoning.
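To make the model class concrete, the following is a minimal sketch (not the authors' code) of the forward pass of a one-hidden-layer non-overlapping CNN with ReLU activation: the input is partitioned into disjoint patches, each patch is convolved with a shared filter bank `W` plus bias `b`, and the ReLU outputs are pooled and combined by output weights `v`. All names here are illustrative assumptions, not from the paper.

```python
import numpy as np

def relu(x):
    """Rectified linear unit applied elementwise."""
    return np.maximum(x, 0.0)

def one_hidden_layer_cnn(x, W, b, v, patch_size):
    """Forward pass of a one-hidden-layer non-overlapping CNN.

    x          : 1-D input whose length is a multiple of patch_size
    W          : (num_filters, patch_size) shared convolutional filters
    b          : (num_filters,) biases
    v          : (num_filters,) second-layer (output) weights
    patch_size : length of each disjoint patch

    Non-overlapping means the stride equals the patch size, so each
    input coordinate contributes to exactly one hidden unit per filter.
    """
    patches = x.reshape(-1, patch_size)       # (num_patches, patch_size)
    hidden = relu(patches @ W.T + b)          # (num_patches, num_filters)
    return hidden.sum(axis=0) @ v             # scalar prediction

# Illustrative usage with random parameters (the paper's recovery method
# would aim to restore W and b exactly from a noise-contaminated copy).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((3, 4))  # 3 filters over patches of length 4
b = rng.standard_normal(3)
v = rng.standard_normal(3)
y = one_hidden_layer_cnn(x, W, b, v, patch_size=4)
```

Under this setup, "purification" amounts to projecting the possibly corrupted `(W, b)` back to the ground-truth parameters, which the paper shows is achievable exactly under mild assumptions and overparameterization.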