Abstract: One of the key purposes of eXplainable AI (XAI) is to develop techniques for understanding the predictions made by Machine Learning (ML) models and for assessing how reliable they are. Several encoding schemas have recently been pointed out, showing how ML classifiers of various types can be mapped to Boolean circuits exhibiting the same input-output behavior. Thanks to such mappings, XAI queries about classifiers can be delegated to the corresponding circuits. In this paper, we present explanation queries and verification queries about classifiers, and we show how they can be addressed by combining queries and transformations on the associated Boolean circuits. Taking advantage of previous results from the knowledge compilation map, we identify a number of cases for which XAI queries are tractable, provided that the circuit has first been turned into a compiled representation.
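To make the kind of query the abstract refers to concrete, here is a minimal, purely illustrative sketch in Python. The toy classifier and the helper `is_sufficient_reason` are hypothetical names introduced for illustration; the sketch answers one explanation query (whether fixing a subset of an instance's feature values forces the prediction) by brute-force enumeration. The paper's point is different in spirit: once the classifier is mapped to an equivalent Boolean circuit and compiled into a suitable target language from the knowledge compilation map, such queries can be answered tractably rather than by enumeration.

```python
from itertools import product

# Hypothetical toy Boolean classifier over three features, standing in for an
# ML model whose input-output behavior has been captured exactly by a circuit.
# Here: predict 1 iff (x1 and x2) or x3.
def classifier(x1, x2, x3):
    return (x1 and x2) or x3

def is_sufficient_reason(fixed, predict):
    """Check whether fixing the feature values in `fixed` (dict name -> bool)
    forces the prediction `predict`, i.e., every completion of the remaining
    features yields the same class. Brute force for illustration only; on a
    compiled circuit representation such checks can be done in polynomial time."""
    features = ["x1", "x2", "x3"]
    free = [f for f in features if f not in fixed]
    for values in product([False, True], repeat=len(free)):
        assignment = dict(fixed, **dict(zip(free, values)))
        if classifier(**assignment) != predict:
            return False
    return True

# Instance (x1=1, x2=1, x3=0) is classified 1; is {x1=1, x2=1} enough on its own?
print(is_sufficient_reason({"x1": True, "x2": True}, predict=True))  # True
print(is_sufficient_reason({"x1": True}, predict=True))              # False
```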