A Cognitive Framework for Learning Debiased and Interpretable Representations via Debiasing Global Workspace

Published: 10 Oct 2024, Last Modified: 04 Nov 2024 · UniReps · CC BY 4.0
Supplementary Material: zip
Track: Proceedings Track
Keywords: Global Workspace Theory, Debiasing Methods, Explainable AI, Cognitive Science
TL;DR: Inspired by global workspace theory, we propose a novel debiasing framework, Debiasing Global Workspace, that learns debiased and interpretable representations of attributes without defining specific bias types.
Abstract: When trained on biased datasets, Deep Neural Networks (DNNs) often make predictions based on attributes derived from features spuriously correlated with the target labels. This is especially problematic when these irrelevant features are easier for the model to learn than the truly relevant ones. Many debiasing methods have been proposed to address this issue, but they often require predefined bias labels and incur substantial additional computational cost by incorporating auxiliary models. Instead, we offer a perspective orthogonal to existing approaches, inspired by cognitive science, specifically Global Workspace Theory (GWT). Our method, Debiasing Global Workspace (DGW), is a novel debiasing framework consisting of specialized modules and a shared workspace, which allows for increased modularity and improved debiasing performance. Additionally, DGW makes the decision-making process more transparent by using attention masks to visualize which input features the model focuses on during training and inference. We first propose an instantiation of GWT for debiasing. We then describe the implementation of each component of DGW. Finally, we validate our method on various biased datasets, demonstrating its effectiveness in mitigating bias and improving model performance.
Submission Number: 4
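
To make the shared-workspace idea in the abstract more concrete, below is a minimal, illustrative PyTorch sketch of a workspace-style attention bottleneck: learned workspace slots attend over input feature slots, and the resulting workspace state is broadcast back to gate the features. This is not the authors' DGW implementation; the module and parameter names (SharedWorkspaceSketch, workspace_slots, slot_dim, workspace_dim) are assumptions made only for illustration.

import torch
import torch.nn as nn


class SharedWorkspaceSketch(nn.Module):
    """Illustrative workspace bottleneck (assumed design, not the paper's DGW):
    learned workspace slots read from input feature slots via attention,
    then broadcast a gating signal back to those slots."""

    def __init__(self, slot_dim, workspace_slots=4, workspace_dim=64):
        super().__init__()
        # Learned workspace slots act as attention queries over the features.
        self.workspace = nn.Parameter(torch.randn(workspace_slots, workspace_dim))
        self.to_q = nn.Linear(workspace_dim, workspace_dim, bias=False)
        self.to_k = nn.Linear(slot_dim, workspace_dim, bias=False)
        self.to_v = nn.Linear(slot_dim, workspace_dim, bias=False)
        self.broadcast = nn.Linear(workspace_dim, slot_dim)

    def forward(self, feats):
        # feats: (batch, num_feature_slots, slot_dim), e.g. a flattened CNN feature map.
        b = feats.size(0)
        q = self.to_q(self.workspace).expand(b, -1, -1)                  # (b, W, d)
        k, v = self.to_k(feats), self.to_v(feats)                        # (b, N, d)
        # Write phase: each workspace slot attends over the feature slots.
        attn = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)  # (b, W, N)
        workspace_state = attn @ v                                       # (b, W, d)
        # Broadcast phase: the workspace gates the original feature slots.
        readout = self.broadcast(workspace_state).mean(dim=1, keepdim=True)      # (b, 1, slot_dim)
        gated_feats = feats * torch.sigmoid(readout)
        # attn can be reshaped to the spatial grid and visualized as an attention mask.
        return gated_feats, attn

As a usage example, passing feats of shape (8, 49, 512) (a 7x7 CNN feature map flattened to 49 slots) returns gated features of the same shape plus an attn tensor of shape (8, 4, 49), which can be reshaped to 7x7 and overlaid on the input as an attention-mask visualization in the spirit of the abstract's interpretability claim.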