Examining Interpretable Feature Relationships in Deep Networks for Action Recognition

28 May 2019 (modified: 05 May 2023) · Submitted to ICML Deep Phenomena 2019 · Readers: Everyone
Keywords: network interpretation, action recognition, deep learning
TL;DR: We expand Network Dissection to include action interpretation and examine interpretable feature paths to understand the conceptual hierarchy used to classify an action.
Abstract: A number of recent methods for understanding neural networks have focused on quantifying the role of individual features. One such method, NetDissect, identifies interpretable features of a model using the Broden dataset of visual semantic labels (colors, materials, textures, objects, and scenes). Given the recent rise of a number of action recognition datasets, we propose extending the Broden dataset to include actions to better analyze learned action models. We describe the annotation process, report results from interpreting action recognition models on the extended Broden dataset, and examine interpretable feature paths to help us understand the conceptual hierarchy used to classify an action.
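
For context, a minimal sketch of the NetDissect-style unit-to-concept matching the abstract builds on: each convolutional unit's activation maps are binarized at a top quantile and scored by IoU against Broden concept masks. This assumes activations and masks are precomputed NumPy arrays; the function and variable names are illustrative, not the paper's actual implementation.

```python
import numpy as np

def dissect_unit(activations, concept_masks, quantile=0.995):
    """Match one convolutional unit to the concept with the highest IoU.

    activations: (N, H, W) array of the unit's activation maps over N images
                 (upsampled to mask resolution).
    concept_masks: dict mapping concept name -> (N, H, W) binary masks.
    quantile: top-activation quantile used to binarize the unit's maps.
    """
    # Binarize the unit's activations at a fixed top quantile,
    # as in the Network Dissection procedure.
    threshold = np.quantile(activations, quantile)
    unit_mask = activations > threshold

    best_concept, best_iou = None, 0.0
    for concept, mask in concept_masks.items():
        intersection = np.logical_and(unit_mask, mask).sum()
        union = np.logical_or(unit_mask, mask).sum()
        iou = intersection / union if union > 0 else 0.0
        if iou > best_iou:
            best_concept, best_iou = concept, iou
    # The unit is labeled with the best-matching concept; a unit is
    # typically called "interpretable" only if best_iou clears a threshold.
    return best_concept, best_iou
```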