Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection

Sep 25, 2019 (edited Mar 11, 2020) · ICLR 2020 Conference Blind Submission
  • Keywords: Feature Interaction, Interpretability, Black Box, AutoML
  • TL;DR: Proposed methods to extract and leverage interpretations of feature interactions
  • Abstract: Recommendation is a prevalent application of machine learning that affects many users; therefore, it is important for recommender models to be accurate and interpretable. In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems. In particular, we propose to interpret feature interactions from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes. By not assuming the structure of the recommender system, our approach can be used in general settings. In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction. We found that our interaction interpretations are both informative and predictive, e.g., significantly outperforming existing recommender models. What's more, the same approach to interpret interactions can provide new insights into domains even beyond recommendation, such as text and image classification.
  • Code: https://github.com/mtsang/interaction_interpretability
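
The abstract's core idea — detect feature interactions in a source model, then encode them explicitly in a target model — builds on Neural Interaction Detection (NID), which scores interactions from a trained MLP's first-layer weights. Below is a minimal, hedged sketch of that scoring heuristic (not the authors' exact implementation; the function name and the precomputed hidden-unit influence vector `z` are assumptions for illustration):

```python
import numpy as np

def pairwise_interaction_strengths(W1, z):
    """NID-style pairwise interaction scores from an MLP's first layer.

    W1: (hidden, features) first-layer weight matrix of a trained MLP.
    z:  (hidden,) aggregated absolute influence of each hidden unit on
        the output (e.g., derived from the upper layers' weights).
    Returns a dict {(i, j): strength} for feature pairs i < j.
    """
    absW = np.abs(W1)
    n_feat = W1.shape[1]
    scores = {}
    for i in range(n_feat):
        for j in range(i + 1, n_feat):
            # A hidden unit can only model an interaction if it attends to
            # BOTH features, hence the elementwise min; each unit's
            # contribution is weighted by its influence on the output.
            scores[(i, j)] = float(
                np.sum(np.minimum(absW[:, i], absW[:, j]) * z)
            )
    return scores
```

In the paper's recommendation setting, the top-scoring pairs would then be materialized as explicit cross features (e.g., concatenated or crossed categorical IDs) and fed to the target recommender, which is why neither model's internals need to be assumed beyond training the interpretable source MLP.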