Learning unfolded networks with a cyclic group structure

Published: 07 Nov 2022, Last Modified: 17 Sept 2023 · NeurReps 2022 Poster · Readers: Everyone
Keywords: Equivariance, model-based learning, cyclic groups, unfolded networks
TL;DR: We propose equivariant unfolded networks where the weights of each layer are governed by a cyclic group structure.
Abstract: Deep neural networks lack straightforward ways to incorporate domain knowledge and are notoriously considered black boxes. Prior work has attempted to inject domain knowledge into architectures implicitly through data augmentation. Building on recent advances in equivariant neural networks, we propose networks that explicitly encode domain knowledge, specifically equivariance with respect to rotations. Using unfolded architectures, a rich framework that originated in sparse coding and carries theoretical guarantees, we present interpretable networks with sparse activations. The equivariant unfolded networks compete favorably with baselines while using only a fraction of their parameters, as showcased on (rotated) MNIST and CIFAR-10.
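To make the idea concrete, below is a minimal NumPy sketch of the kind of construction the abstract describes: an unfolded ISTA network whose dictionary is tied across the cyclic rotation group C4, so each base filter contributes all four of its 90° rotations. This is an illustrative assumption, not the authors' implementation; the function names (`c4_dictionary`, `unfolded_ista`) and all parameters are hypothetical.

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of the l1 norm: yields the sparse activations.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def c4_dictionary(base_atoms):
    # base_atoms: (m, k, k) learnable filters. The full dictionary stacks
    # all four 90-degree rotations of each filter, so the layer's weights
    # are governed by the cyclic group C4 (weight tying across rotations).
    rotated = [np.rot90(base_atoms, r, axes=(1, 2)) for r in range(4)]
    D = np.concatenate(rotated, axis=0)        # (4m, k, k), block r = rot^r
    return D.reshape(D.shape[0], -1).T         # (k*k, 4m), atoms as columns

def unfolded_ista(x, D, lam=0.1, step=None, n_layers=10):
    # Each unfolded "layer" is one ISTA iteration:
    #   z <- soft_threshold(z + step * D^T (x - D z), lam * step)
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / L, L = ||D||_2^2
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):
        z = soft_threshold(z + step * D.T @ (x - D @ z), lam * step)
    return z
```

Because the soft threshold is pointwise and rotating the input patch only permutes the rotation blocks of `D.T @ x`, the resulting codes are exactly equivariant: encoding `np.rot90(x)` gives the codes of `x` with the four C4 blocks cyclically shifted.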
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2211.09238/code)