Interpretable Active Learning

ICML 2017 WHI Submission, 17 Jun 2017 (modified: 19 Jun 2017)
Abstract: Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning, too, has become more opaque. There has been little investigation into interpreting what specific trends and patterns an active learning strategy may be exploring. This work expands on the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. We demonstrate how LIME can be used to generate locally faithful explanations for an active learning strategy, and how these explanations can be used to understand how different models and datasets explore a problem space over time. We propose a measure of uncertainty bias based on disparate impact that allows further exploration of the relative exploitation of different data subgroups. We combine the LIME framework with the uncertainty bias metric to demonstrate how clusters of unlabeled points can be formed automatically based on common sources of uncertainty. We show that this enables an interpretable account of what an active learning algorithm is learning, as points with similar sources of uncertainty have their uncertainty bias resolved.
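As a concrete illustration of the idea in the abstract, the following sketch (not the authors' code; the dataset, classifier, and entropy-based query strategy are illustrative assumptions) shows how LIME can be used to explain the unlabeled point that an uncertainty-sampling active learner would query next:

```python
# Minimal sketch: explain an uncertainty-sampling query with LIME.
# Assumptions: breast-cancer data, a random forest, and predictive-entropy
# uncertainty sampling stand in for whatever model/strategy is used in practice.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Pretend only a small labeled pool exists; the rest is the unlabeled pool.
rng = np.random.RandomState(0)
labeled = rng.choice(len(X), size=50, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X[labeled], y[labeled])

# Uncertainty sampling: query the unlabeled point with the highest predictive entropy.
probs = model.predict_proba(X[unlabeled])
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
query_idx = unlabeled[np.argmax(entropy)]

# LIME explanation of the queried point: which features drive the model's
# (uncertain) prediction there, i.e. why this point was recommended.
explainer = LimeTabularExplainer(
    X[labeled],
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)
explanation = explainer.explain_instance(X[query_idx], model.predict_proba, num_features=5)
print(explanation.as_list())
```

The feature/weight pairs returned by `as_list()` give a locally faithful view of what makes the queried point uncertain; grouping unlabeled points whose explanations share dominant features is one way to form the uncertainty-based clusters the abstract describes.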
TL;DR: Attempt to analyze trends in the concepts active learning is exploring
Keywords: Active learning, interpretable machine learning