Preventing Dimensional Collapse in Contrastive Local Learning with Subsampling

Published: 16 Jun 2023, Last Modified: 17 Jul 2023, ICML LLW 2023
Keywords: Decoupled Learning, Local Learning, Self-supervised Learning, Representation Collapse
TL;DR: We address the challenge of training DNNs via local learning with more than four blocks in self-supervised learning, using a simple feature-similarity-based sampling method that prevents dimensional collapse.
Abstract: This paper presents an investigation of the challenges of training Deep Neural Networks (DNNs) via self-supervised objectives, using local learning as a parallelizable alternative to traditional backpropagation. In our approach, DNNs are segmented into distinct blocks, each updated independently via gradients provided by small local auxiliary Neural Networks (NNs). Despite the evident computational benefits, extensive splits often result in performance degradation. Through analysis of a synthetic example, we identify a layer-wise dimensional collapse as a major factor behind such performance losses. To counter this, we propose a novel and straightforward sampling strategy based on blockwise feature similarity, explicitly designed to avoid such dimensional collapse.
Submission Number: 13
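The abstract describes subsampling a batch by blockwise feature similarity before the local contrastive update, but gives no implementation details here. The sketch below is only an illustrative guess at what such a step could look like, not the authors' method: a greedy farthest-point selection under cosine similarity, applied to one block's features before its auxiliary loss. All function names and the choice of selection rule are assumptions.

```python
# Hypothetical sketch (not the paper's code): subsample a batch by blockwise
# feature similarity so that the local contrastive loss sees diverse features.
import torch
import torch.nn.functional as F


def similarity_subsample(features: torch.Tensor, keep: int) -> torch.Tensor:
    """Greedily keep `keep` samples whose block features are mutually dissimilar.

    features: (N, D) activations of the current block for a batch of N samples.
    Returns the indices of the retained samples.
    """
    z = F.normalize(features, dim=1)            # unit-norm features
    sim = z @ z.t()                             # (N, N) cosine similarities
    selected = [0]                              # start from an arbitrary sample
    while len(selected) < keep:
        # For each candidate, its worst-case similarity to the selected set
        max_sim = sim[:, selected].max(dim=1).values
        max_sim[selected] = float("inf")        # never re-pick a selected sample
        selected.append(int(max_sim.argmin()))  # pick the most dissimilar one
    return torch.tensor(selected)


# Hypothetical usage inside one block's local update:
#   feats = block(x)                               # local forward pass
#   idx = similarity_subsample(feats.detach(), keep=feats.size(0) // 2)
#   local_loss = aux_head_loss(feats[idx])         # gradients stay within the block
```

The intuition is that dropping near-duplicate features keeps the per-block training signal spread over more directions, which is one plausible way a similarity-based sampler could counter layer-wise dimensional collapse.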