Keywords: equivariant networks, differential invariants, projective group
Abstract: Equivariant networks enhance model efficiency and generalization by embedding symmetry priors into their architectures.
However, most existing methods, primarily based on group convolutions and steerable convolutions, face significant limitations when dealing with complex transformation groups, particularly the projective group, which plays a crucial role in vision.
In this work, we tackle the challenge by constructing projective equivariant networks based on differential invariants.
Using the moving frame method with a carefully selected cross section tailored for multi-dimensional functions, we derive a complete and concise set of second-order fundamental differential invariants of the projective group.
We provide a rigorous analysis of the properties and transformation relationships of their underlying components, yielding a further simplified and unified set of fundamental differential invariants, which facilitates both theoretical analysis and practical applications.
Building on this foundation, we develop PDINet, the first deep projective equivariant network, which achieves full projective equivariance without discretizing or sampling the group.
Empirical results on the projectively transformed STL-10 and Imagenette datasets show that PDINet achieves improvements of 11.39\% and 5.66\% in accuracy over the respective standard baselines under out-of-distribution settings, demonstrating its strong generalization to complex geometric transformations.
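For background, the projective group referenced above acts on planar image coordinates via the standard homography formula (given here as context only; the paper's derived invariants are not reproduced):

```latex
% Standard action of the planar projective group (homographies) on image
% coordinates; A = (a_{ij}) is a nonsingular 3x3 matrix defined up to scale.
(x, y) \;\longmapsto\;
\left(
  \frac{a_{11}x + a_{12}y + a_{13}}{a_{31}x + a_{32}y + a_{33}},\;
  \frac{a_{21}x + a_{22}y + a_{23}}{a_{31}x + a_{32}y + a_{33}}
\right),
\qquad \det(a_{ij}) \neq 0.
```

Affine and Euclidean transformations arise as the special cases with $a_{31} = a_{32} = 0$, which is why equivariance to the full projective group subsumes the symmetries handled by most prior equivariant architectures.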
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 26484