Equivariant Mesh Attention Networks

Published: 29 Aug 2022, Last Modified: 30 Jun 2023. Accepted by TMLR.
Authors that are also TMLR Expert Reviewers: ~Taco_Cohen1
Abstract: Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research. Recent works on mesh processing have concentrated on various kinds of natural symmetries, including translations, rotations, scaling, node permutations, and gauge transformations. To date, no existing architecture is equivariant to all of these transformations. In this paper, we present an attention-based architecture for mesh data that is provably equivariant to all transformations mentioned above. Our pipeline relies on the use of relative tangential features: a simple, effective, equivariance-friendly alternative to raw node positions as inputs. Experiments on the FAUST and TOSCA datasets confirm that our proposed architecture achieves improved performance on these benchmarks and is indeed equivariant, and therefore robust, to a wide variety of local/global transformations.
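The abstract describes relative tangential features as a translation-friendly alternative to raw node positions. The paper's exact RelTan formula is not reproduced here, so the following is only a minimal NumPy sketch of the general idea under stated assumptions: per-vertex neighbor offsets (which cancel global translations), projected onto the vertex tangent plane, and normalized by local edge length raised to a hypothetical relative-power hyperparameter `rho`. The function name and all details are illustrative, not the authors' implementation.

```python
import numpy as np

def relative_tangential_features(positions, neighbors, normals, rho=1.0):
    """Illustrative sketch of relative tangential features (not the paper's formula).

    For each vertex v: take offsets to its neighbors (global translation
    cancels in the subtraction), remove the component along v's normal to
    keep only the tangential part, and scale by the mean local edge length
    raised to a relative power `rho`.
    """
    feats = np.zeros_like(positions)
    for v, nbrs in enumerate(neighbors):
        offsets = positions[nbrs] - positions[v]            # translation-invariant
        n = normals[v] / np.linalg.norm(normals[v])         # unit vertex normal
        tangential = offsets - (offsets @ n)[:, None] * n   # project onto tangent plane
        scale = np.mean(np.linalg.norm(offsets, axis=1)) ** rho
        feats[v] = tangential.mean(axis=0) / (scale + 1e-12)
    return feats
```

Because the features depend only on position differences, translating the whole mesh leaves them unchanged, which is the invariance property the abstract highlights.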
Certifications: Expert Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Summary of the changes for the camera-ready version, based on the reviewers' comments:
- Relative tangential features
  - Adjusted the presentation of RelTan features. The new formula is equivalent, but more evocative of the intuition behind our proposed RelTan features.
  - Added visualizations of RelTan features for special choices of the relative power hyperparameter.
  - New discussion of the influence of the relative power, with high-level insights on how to choose it in practice.
- Equivariant mesh attention
  - Adjusted the mathematical presentation toward an equivalent but more intuitive presentation of our attention mechanism.
  - Moved Algorithm 1 from the appendix into this section.
- Comparison to Gauge Equivariant Transformers
  - Included a more comprehensive discussion of the shortcomings of GETs and how the use of RelTan features and EMAN layers can alleviate these issues.
  - Included GEM-CNN/EMAN variants for all experiments using GET features as inputs.
  - New visualization comparing RelTan and GET features (Appendix H and an interactive notebook with 3D meshes).
- Paper layout
  - Edited the ordering and content of some sections for a more cohesive presentation of our work.
  - Improved the tabular display of our experimental results.
  - Fixed typos and other minor writing issues.
Code: https://github.com/gallego-posada/eman
Assigned Action Editor: ~Tie-Yan_Liu1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 112