Keywords: commonsense knowledge graphs, hybrid reasoning
TL;DR: Most large modern commonsense knowledge graphs, and other automatically constructed knowledge graphs, share a major omission that needs attention: the lack of negative knowledge makes them unsuitable for reasoning and for checking believability.
Abstract: In recent years, a significant number of large-scale commonsense knowledge graphs have been developed through crowdsourcing and extraction from natural-language texts. They have been applied successfully in several areas, but their use has mostly been restricted to information search and retrieval and to improving the accuracy of other methods. The ability to draw conclusions with software reasoners over these knowledge graphs is limited: they contain more information than knowledge. In this work, I analyze these shortcomings and their causes, focusing on the type of knowledge that is missed during knowledge acquisition, namely negative knowledge, and consider ways to overcome these problems by enhancing commonsense knowledge graphs. Drawing parallels with human intelligence, such enhanced graphs can be used in hybrid architectures with neural networks to develop trustworthy AI systems. The concerns discussed here apply to automatically constructed domain knowledge graphs as well.