Reliable Learning by Subsuming a Trusted Model: Safe Exploration of the Space of Complex Models
Abstract: Designing machine learning algorithms that are reliable, safe, and trustworthy is critical when predictions drive high-stakes decisions in real-world applications such as healthcare, law, and self-driving cars. A fundamental challenge faced by a practitioner is how to trade off the higher accuracy of a complex model against the greater reliability of a simpler, trusted model. In this paper, we propose a novel learning framework to tackle this challenge—our key idea is to safely explore the space of complex models by subsuming a base model that is already trusted. We achieve this by enforcing a regularization constraint in the learning process of the complex model based on the predictions of the trusted model. Our approach is generic, allowing us to consider different trusted models and different ways to enforce the regularization constraint. We demonstrate these ideas via experiments on synthetic and real-world datasets.
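To make the key idea concrete, here is a minimal sketch (not the paper's actual method) of one way to enforce such a regularization constraint: the complex model's loss adds a penalty, weighted by a hypothetical coefficient `lam`, on the squared gap between its predictions and those of a fixed trusted model. All data, models, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (illustrative assumption)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Trusted" base model: a fixed, simple linear rule using only feature 0
w_trusted = np.array([1.0, 0.0])
p_trusted = sigmoid(X @ w_trusted)

def train(lam, steps=2000, lr=0.1):
    """Fit a logistic model minimizing cross-entropy plus
    lam * mean squared gap to the trusted model's predictions."""
    w = np.zeros(2)
    n = len(y)
    for _ in range(steps):
        p = sigmoid(X @ w)
        # Gradient of cross-entropy term: X^T (p - y) / n
        # Gradient of penalty term:       X^T (2*lam*(p - p_trusted)*p*(1-p)) / n
        grad = X.T @ ((p - y) + 2 * lam * (p - p_trusted) * p * (1 - p)) / n
        w -= lr * grad
    return w

w_free = train(lam=0.0)   # unconstrained complex model
w_safe = train(lam=10.0)  # complex model tied to the trusted model

# The regularized model's predictions stay closer to the trusted model's
gap_free = np.mean((sigmoid(X @ w_free) - p_trusted) ** 2)
gap_safe = np.mean((sigmoid(X @ w_safe) - p_trusted) ** 2)
```

Increasing `lam` shrinks the prediction gap to the trusted model, trading some accuracy on the training objective for behavior that stays near the trusted baseline.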