Abstract: How can humans better understand, teach, and communicate with artificial neural networks, both to correct their mistakes and to impart new knowledge? Currently, network reasoning is largely opaque, and attempts to modify it typically require the costly addition of new labeled data and retraining, with no guarantee that the desired improvement will be achieved. Here, we develop a framework that lets humans understand the reasoning logic of a network easily and intuitively, in graphical form. We provide means for humans to leverage their broader contextual knowledge, common sense, and causal-inference abilities: they simply inspect the graph and modify it as needed to correct any flawed underlying network reasoning. We then automatically merge and distill the modified knowledge back into the original network. The improved network can exactly replace the original, yet performs better thanks to human teaching. We demonstrate the viability of the approach on large-scale image classification and zero-shot learning tasks.
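To make the "merge and distill back into the original network" step concrete, here is a minimal sketch of how such a distillation pass could look in PyTorch. This is an illustration only, not the paper's actual method: the function `distill_step`, the tensor `edited_graph_logits` (standing in for the outputs of the human-corrected reasoning graph), and the loss weighting are all assumptions chosen to show the standard soft-target distillation pattern.

```python
import torch
import torch.nn.functional as F

def distill_step(student, optimizer, images, labels, edited_graph_logits,
                 temperature=2.0, alpha=0.5):
    """One hypothetical training step that mixes the usual hard-label loss
    with a KL distillation loss toward the human-edited graph's outputs."""
    optimizer.zero_grad()
    student_logits = student(images)

    # Standard cross-entropy on the original ground-truth labels,
    # so the network does not drift from what it already does well.
    ce_loss = F.cross_entropy(student_logits, labels)

    # Soft-target KL divergence toward the edited graph's predictions;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    t = temperature
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(edited_graph_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)

    loss = alpha * ce_loss + (1 - alpha) * kd_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the student keeps its original architecture, the distilled network can drop in as an exact replacement for the original, which is the property the abstract highlights.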
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Renjie_Liao1
Submission Number: 6382