Explainable Attribution using Additive Gaussian Processes

Published: 27 May 2024, Last Modified: 27 May 2024, AABI 2024, CC BY 4.0
Keywords: Gaussian process, attribution, additive model
Abstract: With advances in computational power, complex systems that predict outputs for industrial problems have developed rapidly. Attributing outputs to input features, or attributing output changes to input or system changes, is a critical and challenging problem in many real-world applications. In industrial settings, a system may be a chain of large-scale models or simulators, or a combination of both, that is black-box and hard to interpret. The inputs to a system can change over time as more information becomes available, and the system itself can be updated as new models or simulators are deployed or retrained. Understanding how system behaviours change provides invaluable insight into black-box systems and aids critical downstream business decision making. Attribution is the framework that tackles these problems. However, balancing explainability against accuracy remains challenging: attribution methods based on black-box models are flexible enough to represent the systems but lack explainability, while simpler models such as linear regression are interpretable but lack the capacity to represent the systems well. In this paper, we propose an explainable attribution framework based on additive Gaussian processes that can be applied to three major types of attribution tasks commonly seen in practice, and we demonstrate each use case with a toy example.
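The abstract does not spell out the mechanism, but the core idea behind additive-GP attribution can be sketched: when the kernel is a sum of per-feature 1-D kernels, the GP posterior mean decomposes into a sum of per-feature components, and each component serves as that feature's attribution. The following minimal NumPy sketch illustrates this; the kernel choice, hyperparameters, and toy data are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # 1-D RBF kernel between column vectors a (n,1) and b (m,1).
    return variance * np.exp(-0.5 * (a - b.T) ** 2 / lengthscale ** 2)

rng = np.random.default_rng(0)
n, D = 100, 3
X = rng.uniform(-2, 2, size=(n, D))
# Toy additive ground truth: one nonlinear effect per feature, plus noise.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2] + 0.1 * rng.standard_normal(n)

noise_var = 0.1 ** 2
# Additive kernel: sum of independent 1-D kernels, one per input dimension.
K = sum(rbf(X[:, [d]], X[:, [d]]) for d in range(D))
alpha = np.linalg.solve(K + noise_var * np.eye(n), y)

# Because the cross-covariance is also additive, the posterior mean at a
# test point splits into per-feature components: each one is that
# feature's attribution, and they sum to the standard GP prediction.
X_test = rng.uniform(-2, 2, size=(5, D))
contrib = np.stack(
    [rbf(X_test[:, [d]], X[:, [d]]) @ alpha for d in range(D)], axis=1
)  # shape (5, D): one attribution per test point per feature
prediction = contrib.sum(axis=1)  # equals the full additive-GP posterior mean
```

The attraction of this construction is that explainability comes for free from the model structure rather than from a post-hoc approximation: each per-feature component is an exact term of the posterior mean, not an estimate of it.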
Submission Number: 11