Attention-based Neural Cellular Automata

Published: 31 Oct 2022, 18:00, Last Modified: 15 Jan 2023, 17:43
NeurIPS 2022 Accept
Keywords: neural cellular automata, cellular automata, vision transformer, transformer, denoising autoencoding, computer vision, deep learning
TL;DR: We introduce a new class of _attention-based_ NCAs formed using a spatially localized—yet globally organized—self-attention scheme, and we introduce an instantiation named _Vision Transformer Cellular Automata (ViTCA)_.
Abstract: Recent extensions of Cellular Automata (CA) have incorporated key ideas from modern deep learning, dramatically extending their capabilities and catalyzing a new family of Neural Cellular Automata (NCA) techniques. Inspired by Transformer-based architectures, our work presents a new class of _attention-based_ NCAs formed using a spatially localized—yet globally organized—self-attention scheme. We introduce an instance of this class named _Vision Transformer Cellular Automata (ViTCA)_. We present quantitative and qualitative results on denoising autoencoding across six benchmark datasets, comparing ViTCA to a U-Net, a U-Net-based CA baseline (UNetCA), and a Vision Transformer (ViT). When comparing across architectures configured to similar parameter complexity, ViTCA architectures yield superior performance across all benchmarks and for nearly every evaluation metric. We present an ablation study on various architectural configurations of ViTCA, an analysis of its effect on cell states, and an investigation of its inductive biases. Finally, we examine its learned representations via linear probes on its converged cell state hidden representations, yielding, on average, superior results when compared to our U-Net, ViT, and UNetCA baselines.
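The core idea of a "spatially localized—yet globally organized" self-attention NCA can be illustrated with a minimal sketch: every cell queries only its immediate spatial neighborhood with dot-product attention, and the same shared weights are applied everywhere, so repeated updates propagate information globally. This is an illustrative NumPy sketch under stated assumptions, not the paper's ViTCA implementation; the single attention head, toroidal boundary handling, and residual update rule are assumptions made for brevity.

```python
import numpy as np

def localized_self_attention_step(cells, Wq, Wk, Wv, radius=1):
    """One hypothetical attention-based NCA update.

    Each cell attends to its (2*radius + 1)^2 spatial neighborhood
    using single-head scaled dot-product attention (an assumption;
    ViTCA's actual update rule differs).

    cells: (H, W, C) grid of cell state vectors.
    Wq, Wk, Wv: (C, C) projection matrices shared by all cells.
    """
    H, W, C = cells.shape
    q = cells @ Wq  # queries, (H, W, C)
    k = cells @ Wk  # keys
    v = cells @ Wv  # values
    out = np.zeros_like(cells)
    for i in range(H):
        for j in range(W):
            # Gather the neighborhood with wrap-around (toroidal) boundaries.
            idx = [((i + di) % H, (j + dj) % W)
                   for di in range(-radius, radius + 1)
                   for dj in range(-radius, radius + 1)]
            K = np.stack([k[a, b] for a, b in idx])  # (N, C)
            V = np.stack([v[a, b] for a, b in idx])  # (N, C)
            # Scaled dot-product attention over the local neighborhood.
            scores = K @ q[i, j] / np.sqrt(C)        # (N,)
            attn = np.exp(scores - scores.max())
            attn /= attn.sum()
            # Residual update: localized attention applied iteratively
            # lets information spread across the whole grid over time.
            out[i, j] = cells[i, j] + attn @ V
    return out
```

Because the update is purely local and weight-shared, the same function can be iterated for an arbitrary number of steps on a grid of any size, which is the key inductive bias the abstract alludes to.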
Supplementary Material: zip
