Abstract: Cellular automata (CAs) are notable computational models exhibiting rich dynamics emerging from the local interaction of cells arranged in a regular lattice.
Graph CAs (GCAs) generalise standard CAs by allowing for arbitrary graphs rather than regular lattices, similar to how Graph Neural Networks (GNNs) generalise Convolutional Neural Networks (CNNs).
Recently, Graph Neural CAs (GNCAs) have been proposed as models built on top of standard GNNs that can be trained to approximate the transition rule of any arbitrary GCA.
We note that existing GNCAs can violate the locality principle of CAs by leveraging global information and, furthermore, are anisotropic in the sense that their transition rules are not equivariant to isometries of the nodes' spatial locations.
However, it is desirable for instances related by such transformations to be treated identically by the model.
By replacing standard graph convolutions with E(n)-equivariant ones, we avoid anisotropy by design and propose a class of isotropic automata that we call E(n)-GNCAs.
These models are lightweight, but can nevertheless handle large graphs, capture complex dynamics and exhibit emergent self-organising behaviours.
We showcase the broad and successful applicability of E(n)-GNCAs on three different tasks: (i) isotropic pattern formation, (ii) graph auto-encoding, and (iii) simulation of E(n)-equivariant dynamical systems.
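To make the core idea concrete, the sketch below shows one possible E(n)-equivariant GNCA transition step built from EGNN-style message passing: messages depend only on isometry-invariant quantities (cell states and squared relative distances), while coordinates are updated along relative positions. This is a hedged illustration, not the authors' implementation (see the repository linked below for the actual code); the layer widths, aggregation, and normalisation choices here are assumptions.

```python
# Minimal sketch of an E(n)-equivariant GNCA transition step, in the spirit of
# EGNN-style message passing. Illustrative only; hidden sizes, aggregation and
# normalisation are assumptions, not the authors' exact design.
import torch
import torch.nn as nn


class EGNCAStep(nn.Module):
    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        # Edge messages depend only on the two cell states and the squared
        # relative distance, so they are invariant to isometries of the coords.
        self.msg_mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim + 1, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
        )
        # Scalar weight used to move coordinates along relative positions.
        self.coord_mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, 1),
        )
        # Cell-state update from locally aggregated messages.
        self.node_mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h, x, edge_index):
        # h: (N, hidden_dim) cell states, x: (N, n) coordinates,
        # edge_index: (2, E) directed edges (row = senders, col = receivers).
        row, col = edge_index
        rel = x[row] - x[col]                         # relative positions
        dist2 = (rel ** 2).sum(dim=-1, keepdim=True)  # invariant scalar
        m = self.msg_mlp(torch.cat([h[row], h[col], dist2], dim=-1))

        # Equivariant coordinate update: learned scalar times relative position,
        # averaged over each node's neighbourhood (normalisation is a choice).
        deg = torch.zeros(x.size(0), 1, device=x.device).index_add_(
            0, col, torch.ones(col.size(0), 1, device=x.device)
        ).clamp(min=1)
        dx = torch.zeros_like(x).index_add_(0, col, rel * self.coord_mlp(m))
        x = x + dx / deg

        # Invariant state update from summed local messages (locality preserved).
        agg = torch.zeros_like(h).index_add_(0, col, m)
        h = h + self.node_mlp(torch.cat([h, agg], dim=-1))
        return h, x
```

Applying such a step repeatedly over each node's local neighbourhood yields a recurrent transition rule that uses no global information and commutes with translations, rotations and reflections of the input coordinates, which is the sense in which the resulting automaton is isotropic.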
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: All changes are highlighted in blue. In particular:
- we replaced “rigid transformation” with “isometry”
- we replaced “relative distances” with “relative positions”
- we improved the introduction of the graph auto-encoding task and added a comparison with 30-layer (E)GNNs
- we elaborated on the use of global information in the loss functions
- we elaborated on the properties of Eqs. 13 and 15
Code: https://github.com/gengala/egnca
Supplementary Material: zip
Assigned Action Editor: ~Benjamin_Guedj1
Submission Number: 2022