Abstract: The growing application of machine learning methods has raised a discussion on model transparency in the artificial intelligence community. At the center of this discussion is the question of model explanation and interpretability. The genetic programming (GP) community has consistently pointed out, as one of the major advantages of GP, that it produces models that humans can interpret. However, as with other interpretable supervised models, the more complex the model becomes, the less interpretable it is. This work focuses on post-hoc interpretability of GP for symbolic regression. This approach does not explain the process a model follows to reach a decision; instead, it justifies the predictions the model makes. The proposed approach, named Explanation by Local Approximation (ELA), is simple and model-agnostic: it finds the nearest neighbors of the point we want to explain and performs a linear regression over this subset of points. The coefficients of this linear regression are then used to generate a local explanation of the model's prediction. Results show that the errors of ELA are similar to those of a regression performed over all points, and that simple visualizations can provide users with insights about the most relevant attributes.
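A minimal sketch of the ELA idea as described in the abstract, assuming a scikit-learn-style setup: select the k nearest training neighbors of the query point, fit a local linear regression on that subset, and read the coefficients as the local explanation. The function name `ela_explain`, the neighbor count `k`, and the choice of regressing on the model's predictions (rather than the true targets) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import LinearRegression

def ela_explain(X_train, predict_fn, x_query, k=20):
    """Explain predict_fn's behavior around x_query via a local linear fit.

    Returns the coefficients (one per attribute) and intercept of a linear
    regression fitted on the k nearest neighbors of x_query.
    """
    # Find the k training points closest to the point being explained.
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    X_local = X_train[idx[0]]

    # Hedged choice: regress on the black-box model's predictions for the
    # neighborhood, as is common in post-hoc explanation; the paper may
    # instead use the neighbors' true target values.
    y_local = predict_fn(X_local)

    lin = LinearRegression().fit(X_local, y_local)
    # Each coefficient indicates the local relevance of one attribute.
    return lin.coef_, lin.intercept_
```

Because the procedure only needs a prediction function and the training inputs, it applies equally to a GP symbolic-regression model or any other regressor, which is what makes the approach model-agnostic.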