Abstract: The success of Deep Neural Networks (DNNs) depends heavily on data quality. Moreover, predictive uncertainty reduces the reliability of DNNs in real-world applications. In this paper, we address both issues with a unified filtering framework that leverages the underlying data density to denoise training data and to abstain from predicting on confusing samples. The proposed framework distinguishes noisy from clean samples without modifying existing DNN architectures or loss functions. Extensive experiments on multiple benchmark datasets and the recent COVIDx dataset demonstrate the effectiveness of our framework over state-of-the-art (SOTA) methods in denoising training data and abstaining on uncertain test data.
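To illustrate the general idea of density-based filtering described in the abstract (not the authors' exact method, whose details are not given here), the sketch below scores each sample by a Gaussian kernel density estimate over the feature space and drops the lowest-density fraction. The function names, bandwidth, and quantile threshold are illustrative assumptions.

```python
import numpy as np

def density_scores(features, bandwidth=1.0):
    # Gaussian KDE score for each sample: mean kernel value
    # against all samples (including itself).
    diffs = features[:, None, :] - features[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2)).mean(axis=1)

def filter_by_density(features, threshold_quantile=0.05):
    # Keep samples whose density is at or above the chosen quantile;
    # low-density samples are treated as noisy / uncertain.
    scores = density_scores(features)
    threshold = np.quantile(scores, threshold_quantile)
    return scores >= threshold

# Demo: a tight clean cluster plus one far-away outlier.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.5, size=(50, 2))
outlier = np.array([[8.0, 8.0]])
X = np.vstack([clean, outlier])
keep = filter_by_density(X)
print(keep[-1])  # the isolated outlier falls below the density threshold
```

The same scoring could be applied at test time: instead of dropping samples, the model would abstain from predicting on inputs whose density score falls below the threshold.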