Abstract: In recent years there has been a growing move towards explainable AI (XAI). The widespread use of AI systems in a large variety of applications that support human decisions leads to an imperative need to provide explanations of an AI system's functionality. That is, explanations are necessary for earning users' trust in AI systems. At the same time, recent legislation on data privacy, such as the GDPR, requires that any attempt at explainability must not disclose private data and information to third parties. In this work we focus on providing privacy-aware explanations in the realm of team formation scenarios. We propose the means to analyse whether an explainability algorithm incurs privacy breaches when computing explanations for a user.