Handling Missing Data in Decision Trees: A Probabilistic Approach

Jun 10, 2020 (edited Jul 12, 2020) · ICML 2020 Workshop Artemiss Submission
  • Keywords: missing data, decision trees, probabilistic reasoning, probabilistic circuits
  • TL;DR: We handle missing data in decision trees by computing expected prediction w.r.t. tractable density estimators.
  • Abstract: Decision trees are a popular family of models due to their attractive properties, such as interpretability and the ability to handle heterogeneous data. Concurrently, missing data is a prevalent occurrence that hinders the performance of machine learning models. As such, handling missing data in decision trees is a well-studied problem. In this paper, we tackle this problem by taking a probabilistic approach. At deployment time, we use tractable density estimators to compute the "expected prediction" of our models. At learning time, we fine-tune the parameters of already-learned trees by minimizing their "expected prediction loss" w.r.t. our density estimators. We provide brief experiments showcasing the effectiveness of our methods compared to a few baselines.
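The "expected prediction" idea in the abstract can be sketched concretely: marginalize the tree's output over the missing features under a density model fit on the training data. The following is a minimal Monte Carlo illustration, using a joint Gaussian as a simple stand-in for the tractable density estimators (probabilistic circuits) the paper actually uses; all names, the toy data, and the Gaussian choice are illustrative assumptions, not the paper's implementation.

```python
# Sketch: expected prediction of a decision tree under missing features,
# E[f(X) | X_obs = x_obs], estimated by sampling completions of the
# missing features from a Gaussian density fit on the training data.
# (The paper uses tractable density estimators / probabilistic circuits
# and computes this expectation exactly; the Gaussian + Monte Carlo
# version here is only an illustrative stand-in.)
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy, fully observed training data (illustrative).
X = rng.normal(size=(500, 3))
y = X[:, 0] + 2.0 * X[:, 1] - X[:, 2]
tree = DecisionTreeRegressor(max_depth=4).fit(X, y)

# Fit a joint Gaussian density over the features.
mu = X.mean(axis=0)
Sigma = np.cov(X, rowvar=False)

def expected_prediction(x_obs, obs_idx, n_samples=2000):
    """Monte Carlo estimate of E[f(X) | X_obs = x_obs]: sample the
    missing features from the conditional Gaussian, complete the input,
    and average the tree's predictions over the completions."""
    d = len(mu)
    miss_idx = [i for i in range(d) if i not in obs_idx]
    # Conditional Gaussian: mean mu_m + S_mo S_oo^{-1} (x_o - mu_o),
    # covariance S_mm - S_mo S_oo^{-1} S_om.
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_mo = Sigma[np.ix_(miss_idx, obs_idx)]
    S_mm = Sigma[np.ix_(miss_idx, miss_idx)]
    K = S_mo @ np.linalg.inv(S_oo)
    cond_mu = mu[miss_idx] + K @ (x_obs - mu[obs_idx])
    cond_Sigma = S_mm - K @ S_mo.T
    completions = rng.multivariate_normal(cond_mu, cond_Sigma, size=n_samples)
    full = np.empty((n_samples, d))
    full[:, obs_idx] = x_obs
    full[:, miss_idx] = completions
    return tree.predict(full).mean()

# Features 0 and 1 observed, feature 2 missing.
val = expected_prediction(np.array([0.5, -1.0]), obs_idx=[0, 1])
print(val)
```

Averaging over sampled completions, rather than plugging in a single imputed value, is what distinguishes the expected-prediction approach from standard mean or mode imputation at deployment time.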