Abstract: Rapid advancements in deep learning have led to many recent breakthroughs.
While deep learning models achieve superior performance, often statistically better than humans, their adoption in safety-critical settings such as healthcare or self-driving cars is hindered by their inability to provide safety guarantees or to expose their inner workings in a human-understandable form. We present MoËT, a novel model based on Mixture of Experts, consisting of decision tree experts and a generalized linear model gating function. Thanks to such a gating function, the model is more expressive than a standard decision tree. To support non-differentiable decision trees as experts, we formulate a novel training procedure. In addition, we introduce a hard thresholding version, MoËT_h, in which predictions are made solely by a single expert chosen via the gating function. Thanks to this property, each MoËT_h prediction can be decomposed into a set of logical rules in a form that can be easily verified. While MoËT is a general-purpose model, we illustrate its power in the reinforcement learning (RL) setting. By training MoËT models on deep RL agents using an imitation learning procedure, we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models. Moreover, we show that MoËT can also be used on real-world supervised learning problems, on which it outperforms other verifiable machine learning models.
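To make the described architecture concrete, below is a minimal sketch of a hard-gated mixture of decision tree experts in the spirit of MoËT_h. It is not the paper's training procedure: the class name, hyperparameters, and the simplified alternating fitting scheme (re-fitting trees on routed points and retraining a multinomial logistic gate) are illustrative assumptions built from scikit-learn components.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeRegressor

class HardTreeMixtureSketch:
    """Illustrative hard-gated mixture of decision tree experts.

    A linear (softmax) gate routes each input to exactly one tree, so every
    prediction decomposes into the gate's linear rule plus one root-to-leaf
    path. This is a simplified alternating scheme, not MoET's actual training.
    """

    def __init__(self, n_experts=3, max_depth=4, n_rounds=5, seed=0):
        self.n_experts, self.max_depth, self.n_rounds = n_experts, max_depth, n_rounds
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        assign = self.rng.integers(self.n_experts, size=len(X))  # random initial routing
        self.gate = None
        for _ in range(self.n_rounds):
            # Fit one decision tree expert on the points currently routed to it.
            self.experts = []
            for k in range(self.n_experts):
                mask = assign == k
                tree = DecisionTreeRegressor(max_depth=self.max_depth)
                tree.fit(X[mask] if mask.any() else X, y[mask] if mask.any() else y)
                self.experts.append(tree)
            # Retarget the gate: each point should go to its best-fitting expert.
            errs = np.stack([(y - t.predict(X)) ** 2 for t in self.experts], axis=1)
            best = errs.argmin(axis=1)
            if np.unique(best).size < 2:       # degenerate case: one expert suffices
                self.gate, self.only = None, int(best[0])
                break
            self.gate = LogisticRegression(max_iter=1000).fit(X, best)
            assign = self.gate.predict(X)      # hard routing for the next round
        return self

    def predict(self, X):
        if self.gate is None:
            return self.experts[self.only].predict(X)
        chosen = self.gate.predict(X)          # hard thresholding: one expert per input
        return np.array([self.experts[k].predict(x.reshape(1, -1))[0]
                         for k, x in zip(chosen, X)])
```

Under this kind of hard routing, explaining a prediction amounts to reading off the gate's linear decision and the root-to-leaf path of the selected tree, which illustrates the decomposition-into-rules property the abstract attributes to MoËT_h.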