ViNE-GATr: scaling geometric algebra transformers with virtual nodes embeddings

Published: 06 Mar 2025, Last Modified: 09 Apr 2025
ICLR 2025 Workshop MLMP Poster · CC BY 4.0
Track: Short paper
Keywords: Geometric deep learning, Transformers, Virtual nodes learning, Equivariance
Abstract:

Equivariant neural networks can effectively model physical systems by naturally handling the underlying geometric quantities and preserving their symmetries, but scaling them to large geometric data remains challenging. Naive downsampling typically disrupts the features' transformation laws, limiting the applicability of equivariant networks in large-scale settings. In this work, we propose a scalable equivariant transformer that efficiently processes geometric data in a coarse-grained latent space while preserving the E(3) symmetries of the problem. In particular, by building on the Geometric Algebra Transformer (GATr) and PerceiverIO architectures, our method learns equivariant latent tokens that decouple processing complexity from the input data representation while maintaining global equivariance.
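To make the PerceiverIO-style bottleneck concrete, below is a minimal sketch of cross-attention from a variable-length input set onto a fixed number of learned latent tokens, which is what decouples downstream compute from input size. The class name `LatentCrossAttention`, its parameters, and the use of a plain `nn.MultiheadAttention` are illustrative assumptions; the actual ViNE-GATr operates on geometric-algebra multivectors with equivariant attention, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    """Perceiver-style encoder sketch (hypothetical, not the paper's code):
    a fixed set of learned latent tokens cross-attends to a variable-length
    set of input tokens, so later processing cost depends only on the number
    of latents, not on the input size."""
    def __init__(self, num_latents: int, dim: int, num_heads: int = 4):
        super().__init__()
        # Learned latent tokens, shared across all inputs.
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        # Standard (non-equivariant) attention stands in for GATr's
        # geometric attention here.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (batch, n_inputs, dim), where n_inputs may be very large.
        b = inputs.shape[0]
        q = self.latents.unsqueeze(0).expand(b, -1, -1)  # (batch, num_latents, dim)
        latents, _ = self.attn(q, inputs, inputs)        # latents attend to inputs
        return latents                                   # (batch, num_latents, dim)

# Usage: 10,000 input tokens are compressed into 64 latent tokens.
x = torch.randn(2, 10_000, 128)
enc = LatentCrossAttention(num_latents=64, dim=128)
z = enc(x)  # torch.Size([2, 64, 128])
```

In the paper's setting, the analogous step would additionally have to respect E(3) equivariance, so the latent tokens and attention operations live in the geometric-algebra representation rather than in a plain feature space.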

Presenter: Thomas Hehn
Submission Number: 23