Abstract: Explainable AI (XAI) techniques have become popular for multiple use cases in the past few years. Here we consider their use in studying model predictions to gather additional training data. We argue that this is equivalent to Active Learning, where the query strategy involves a human-in-the-loop. We provide a mathematical approximation for the role of the human, and present a general formalization of the end-to-end workflow. This enables us to rigorously compare this use of XAI with standard Active Learning algorithms, while allowing for extensions to the workflow. An added benefit is that the utility of such extensions can be assessed via simulation instead of conducting expensive user studies. We also present promising initial results.