A Case Study of Low Ranked Self-Expressive Structures in Neural Network Representations

TMLR Paper1481 Authors

17 Aug 2023 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Understanding neural networks by studying their underlying geometry can help us design more robust training methodologies and better architectures. In this work we approach this problem through the lens of subspace clustering, where each input in the representation space is expressed as a linear combination of other inputs. Such structures are called self-expressive structures; we analyse how they evolve during training and compare their discriminative ability with that of linear classifiers. We also compare the subspace clustering analysis with other representation-analysis tools to show how they are related yet distinct formulations of one another. Additionally, we monitor the evolution of self-expressive structures in networks trained to memorise parts of their training data and examine how they differ from networks that generalise well. Next, to test the limitations of the proposed subspace clustering approach and of other linear probing methodologies in the literature, we perform a similar set of tests on networks with non-linear activation functions and demonstrate the weakness of linear structures in differentiating between models that generalise and models that memorise. Finally, we analyse the relationship between networks trained with a cross-entropy loss and those trained with a subspace separation loss to better understand how self-expressive structures emerge in networks trained only to classify data.
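To make the self-expressive idea in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's actual method): each representation x_i is approximated as a linear combination of the other representations, X ≈ XC with zero diagonal on C, solved here with a simple ridge-regularised least squares per column. The function name `self_expressive_coefficients` and the penalty `lam` are hypothetical choices for this sketch; the paper may use a different regulariser or solver.

```python
import numpy as np

def self_expressive_coefficients(X, lam=1e-2):
    """Illustrative sketch: fit C so that each column x_i is approximated
    by a linear combination of the *other* columns, i.e. X ≈ X @ C with
    diag(C) = 0.

    X   : (d, n) matrix whose columns are representations of n inputs.
    lam : ridge penalty controlling the norm of the coefficients (assumed).
    """
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]      # exclude x_i itself
        A = X[:, idx]                              # (d, n-1)
        # Ridge-regularised least squares: (A^T A + lam I) c = A^T x_i
        c = np.linalg.solve(A.T @ A + lam * np.eye(n - 1), A.T @ X[:, i])
        C[idx, i] = c
    return C

# Toy usage: points drawn from two low-dimensional subspaces.
rng = np.random.default_rng(0)
basis1 = rng.standard_normal((20, 2))
basis2 = rng.standard_normal((20, 2))
X = np.hstack([basis1 @ rng.standard_normal((2, 15)),
               basis2 @ rng.standard_normal((2, 15))])
C = self_expressive_coefficients(X)
print(np.linalg.norm(X - X @ C) / np.linalg.norm(X))  # small reconstruction residual
```

Because each point lies in a low-dimensional subspace spanned by other points from the same subspace, the reconstruction residual is small and the large entries of C tend to connect points within the same subspace, which is the property subspace clustering analyses exploit.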
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Jeffrey_Pennington1
Submission Number: 1481