Concept whitening for interpretable image recognition

Nat. Mach. Intell. 2020 (modified: 15 Nov 2022)
Abstract: There is much interest in ‘explainable’ AI, but most efforts concern post hoc methods. Instead, a neural network can be made inherently interpretable by aligning human-understandable concepts (aeroplane, bed, lamp and so on) with the axes of its latent space.
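The core idea — decorrelating the latent activations and then rotating them so that individual axes correspond to concepts — can be illustrated with a minimal NumPy sketch. This is not the paper's concept-whitening module (which replaces batch normalization layers and learns the rotation from auxiliary concept datasets during training); here the activations and the concept direction are synthetic, and the whitening and rotation are computed in closed form purely to show the two transformations involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent activations: 500 samples in a 4-dim latent space (illustrative only).
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))

# Step 1: whitening -- decorrelate and normalize the activations so their
# covariance becomes the identity (ZCA-style, via eigendecomposition).
mu = X.mean(axis=0)
cov = np.cov(X - mu, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T  # whitening matrix, W @ cov @ W.T = I

# Step 2: rotation -- build an orthogonal matrix Q whose first column is a
# (whitened) "concept direction". Here that direction is made up; in the paper
# it is learned from labelled concept examples (aeroplane, bed, lamp, ...).
concept_dir = W @ np.array([1.0, 2.0, 0.0, -1.0])
concept_dir /= np.linalg.norm(concept_dir)
Q, _ = np.linalg.qr(np.column_stack([concept_dir, rng.normal(size=(4, 3))]))

# Concept-whitened representation: whiten, then rotate. After this, the
# concept direction coincides (up to sign) with the first latent axis,
# and the axes are mutually decorrelated.
Z = (X - mu) @ W.T @ Q
print(np.allclose(np.cov(Z, rowvar=False), np.eye(4), atol=1e-6))
```

Because the rotation is orthogonal, it preserves the identity covariance produced by whitening, so interpretability comes "for free": reading off a concept's activation is just reading one coordinate of `Z`.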