Learning with Domain Knowledge to Develop Justifiable Convolutional Networks

Published: 01 Jan 2022 · Last Modified: 24 Jun 2025 · ACML 2022 · CC BY-SA 4.0
Abstract: The inherent structure of Convolutional Neural Networks (CNNs) allows them to extract features that are highly correlated with the classes during image classification. However, the extracted features may be merely coincidental and may not be justifiable from a human perspective. For example, from a set of images of cows on grassland, a CNN can erroneously extract grass as the feature of the class cow. This kind of learning has two main limitations: first, in many false-negative cases the correct features are never used, and second, in false-positive cases the system lacks accountability. CNNs have no built-in way to be told which features are justifiable from a human perspective. In this paper, we argue that providing domain knowledge to guide the learning process of a CNN makes it possible to reliably learn justifiable features. We propose a systematic yet simple mechanism to incorporate domain knowledge into the training of CNNs so that they extract justifiable features. The flip side is that our approach requires additional input; however, we show that even with minimal additional input, our method can effectively propagate the knowledge within a class during training. We demonstrate that justifiable features not only enhance accuracy but also require less data and training time. Moreover, we show that the proposed method is more robust against perturbations of the input images.
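The abstract does not specify how the domain knowledge is injected. Purely as a generic illustration of the idea (not the authors' mechanism), one could penalize the fraction of a CNN's activation mass that falls outside a human-annotated object region, so that attending to grass rather than the cow is discouraged. All function names, the mask format, and the weighting scheme below are hypothetical:

```python
import numpy as np

def justifiability_penalty(activation, mask):
    """Fraction of activation mass falling outside the annotated region.

    activation : non-negative (H, W) saliency/feature map from the CNN
    mask       : binary (H, W) map, 1 inside the human-annotated region
    """
    total = activation.sum()
    if total == 0:
        return 0.0
    outside = (activation * (1 - mask)).sum()
    return float(outside / total)

def guided_loss(ce_loss, activation, mask, lam=0.5):
    """Classification loss plus a weighted penalty for unjustified features
    (a hypothetical combination, for illustration only)."""
    return ce_loss + lam * justifiability_penalty(activation, mask)

# Toy example: all activation lands on the grass column,
# none on the cow column marked by the mask.
act = np.array([[0.0, 1.0],
                [0.0, 1.0]])
cow_mask = np.array([[1, 0],
                     [1, 0]])
print(justifiability_penalty(act, cow_mask))  # → 1.0 (all mass outside)
```

In this sketch, a penalty of 0 means the network's activation is fully concentrated on the annotated region, and 1 means it relies entirely on context outside it.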