A General and Efficient SE(3)-Equivariant Graph Framework: Encoding Symmetries with Complete Differential Invariants and Frames
Keywords: equivariant graph neural network, geometric deep learning
TL;DR: We introduce CDIF, an equivariant GNN built on complete differential invariants and frames that addresses theoretical and representational limitations of prior scalarization methods, and proves simpler, more scalable, and superior across diverse dynamical-system modeling tasks.
Abstract: Equivariant graph neural networks (Equiv-GNNs) have demonstrated effectiveness in modeling the dynamics of multi-object systems by explicitly encoding symmetries. Among them, scalarization-based methods are widely adopted for their computational efficiency, particularly in comparison to high-degree steerable models. However, most existing scalarization-based approaches rely on empirically designed invariant functions and lack rigorous theoretical guarantees. Moreover, these methods typically consider only directional information from object positions, neglecting that from higher-order differential components. To address these limitations, we propose a general and efficient SE(3)-equivariant graph framework with **C**omplete **D**ifferential **I**nvariants and **F**rames (CDIF). Specifically, we show how to construct a set of differential invariants that can universally express any invariant function through network layers. Additionally, we show how the directional information can be completely recovered from these invariants via frames that integrate both positional and differential components. Extensive experiments across diverse domains, including molecular dynamics, formation control, motion capture, and particle simulation, validate that our method is simple, scalable, and outperforms state-of-the-art baselines.
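To make the scalarization-plus-frames idea in the abstract concrete, here is a minimal numerical sketch of one pairwise message: rotation-invariant scalars are computed from positions and a first-order differential (velocity), passed through a stand-in for a learned network, and mapped back to an equivariant vector via a local frame built from both positional and differential directions. All function names, the choice of invariants, and the `tanh` stand-in for an MLP are illustrative assumptions, not the paper's actual CDIF architecture.

```python
import numpy as np

def scalarize(x_i, x_j, v_i, v_j):
    """Rotation- and translation-invariant scalars from relative position
    and a first-order differential (relative velocity). Illustrative choice."""
    d = x_i - x_j          # relative position (translation invariant)
    w = v_i - v_j          # relative velocity (translation invariant)
    return np.array([d @ d, w @ w, d @ w])  # inner products are rotation invariant

def frame(x_i, x_j, v_i):
    """Local equivariant frame mixing a positional and a differential direction.
    Rows rotate with the input (not orthonormalized; sketch only)."""
    a = x_i - x_j
    b = v_i
    c = np.cross(a, b)     # third axis; rotates correctly under proper rotations
    return np.stack([a, b, c])

def message(x_i, x_j, v_i, v_j, weights):
    """Equivariant message: invariant coefficients times equivariant frame rows."""
    s = scalarize(x_i, x_j, v_i, v_j)
    coeffs = np.tanh(weights @ s)        # hypothetical stand-in for an MLP on invariants
    return coeffs @ frame(x_i, x_j, v_i)  # linear combination of frame vectors
```

Because the coefficients are invariant and each frame row rotates with the inputs, rotating all positions and velocities by a proper rotation `R` rotates the output message by the same `R`, which is the SE(3)-equivariance property the abstract refers to.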
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 16942