Abstract: Many applications require explainable node classification in
knowledge graphs. Towards this end, a popular “white-box” approach
is class expression learning: Given sets of positive and negative nodes,
class expressions in description logics are learned that separate positive
from negative nodes. Most existing approaches are search-based: they generate
many candidate class expressions and select the best one. However, they
often take a long time to find suitable class
expressions. In this paper, we cast class expression learning as a translation
problem and propose a new family of class expression learning
approaches which we dub neural class expression synthesizers. Training
examples are “translated” into class expressions in a fashion akin
to machine translation. Consequently, our synthesizers are not subject
to the runtime limitations of search-based approaches. We study three
instances of this novel family of approaches based on LSTMs, GRUs,
and set transformers, respectively. An evaluation of our approach on
four benchmark datasets suggests that it can effectively synthesize high-quality
class expressions with respect to the input examples in approximately
one second on average. Moreover, a comparison to state-of-the-art
approaches suggests that we achieve better F-measures on large datasets.
For reproducibility purposes, we provide our implementation as well as
pretrained models in our public GitHub repository at
https://github.com/dice-group/NeuralClassExpressionSynthesis