Keywords: Hyperbolic neural networks, Implicit equilibrium models, Non-Euclidean geometry
Abstract: Euclidean geometry has long dominated neural networks and deep learning, yet neuroscience paints a different picture. At the representational level, spatial and mnemonic maps in the brain are naturally organized in hyperbolic geometry, which supports efficient hierarchical embeddings. Hyperbolic neural networks exploit this property but remain shallow and costly: explicit architectures must retain every intermediate activation for backpropagation, and curvature-induced distortions make training hard to stabilize, leading to prohibitive memory and runtime overhead. At the dynamical level, neural activity tends to converge to stable equilibrium states, conferring robustness, stability, and energy efficiency.
Motivated by these complementary principles, we introduce Hyperbolic Implicit Equilibrium (HIE), the first implicit equilibrium framework for hyperbolic networks. HIE solves directly for a fixed point and trains via implicit differentiation, requiring only a single Jacobian–vector product. This design enables models of effectively infinite depth at a constant memory footprint, while hyperbolic contraction accelerates convergence beyond that of Euclidean counterparts.
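For readers unfamiliar with implicit equilibrium training, the sketch below shows the general pattern in PyTorch: the forward pass solves for the fixed point outside the autograd graph, so memory stays constant no matter how many solver iterations run, and the backward pass differentiates through the equilibrium via the implicit function theorem, with each adjoint iteration costing one vector–Jacobian product. The cell `f`, the naive iteration solver, and all tolerances are illustrative assumptions; this is the generic deep-equilibrium pattern, not HIE's specific hyperbolic solver.

```python
# Minimal sketch of an implicit equilibrium layer (assumptions: naive
# forward iteration as the solver; f is any module mapping (z, x) -> z').
import torch
import torch.nn as nn


def forward_iteration(f, z0, tol=1e-5, max_iter=50):
    """Naive fixed-point solver: iterate z <- f(z) until convergence."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z)
        if torch.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z


class ImplicitEquilibriumLayer(nn.Module):
    def __init__(self, f):
        super().__init__()
        self.f = f  # e.g. a small residual cell: z, x -> tanh(W z + U x)

    def forward(self, x):
        # Forward: find the equilibrium without building a solver graph,
        # so memory does not grow with the number of iterations.
        with torch.no_grad():
            z_star = forward_iteration(lambda z: self.f(z, x),
                                       torch.zeros_like(x))
        # One extra evaluation re-attaches z* to the autograd graph.
        z_star = self.f(z_star, x)

        # Backward: by the implicit function theorem, the adjoint g solves
        # g = J_f^T g + dL/dz*, so we reuse the same fixed-point solver;
        # each iteration is one vector-Jacobian product through f at z*.
        z0 = z_star.detach().requires_grad_()
        f0 = self.f(z0, x)

        def backward_hook(grad):
            return forward_iteration(
                lambda g: torch.autograd.grad(f0, z0, g,
                                              retain_graph=True)[0] + grad,
                grad,
            )

        if z_star.requires_grad:
            z_star.register_hook(backward_hook)
        return z_star
```

In this Euclidean toy, `f` could be any contraction such as `tanh(W z + U x)`; a hyperbolic instantiation would presumably replace the cell with a Lorentz-model map and renormalize onto the manifold after each step.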
We further contribute Lorentz group normalization for stable equilibria, together with a complete theoretical analysis of optimization, stability, and generalization. Experiments show that HIE scales hyperbolic models far beyond prior explicit designs, converging faster and more robustly and revealing the distinctive benefits of hyperbolic geometry for implicit deep learning.
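The abstract names Lorentz group normalization without defining it. As a point of reference, the sketch below shows one generic way to normalize features on the Lorentz (hyperboloid) model: rescale the space-like coordinates and recompute the time-like coordinate so outputs remain exactly on the manifold. The function name, the `scale` parameter, and the construction itself are assumptions for illustration, not necessarily the paper's method.

```python
import torch


def lorentz_normalize(x, curvature_k=1.0, scale=1.0, eps=1e-6):
    """Illustrative normalization on the Lorentz model (an assumption,
    not the paper's Lorentz group normalization).

    Points satisfy -x_0^2 + ||x_space||^2 = -1/k with x_0 > 0. We rescale
    the space-like part to a fixed norm, then recompute the time-like
    coordinate so the output lies exactly on the hyperboloid.
    """
    x_space = x[..., 1:]
    norm = x_space.norm(dim=-1, keepdim=True).clamp_min(eps)
    x_space = scale * x_space / norm
    # Solve -x_0^2 + ||x_space||^2 = -1/k  =>  x_0 = sqrt(||x_space||^2 + 1/k)
    x_time = torch.sqrt(x_space.pow(2).sum(dim=-1, keepdim=True)
                        + 1.0 / curvature_k)
    return torch.cat([x_time, x_space], dim=-1)
```

Projecting back onto the hyperboloid after every update is what keeps curvature-induced numerical drift from compounding across the many implicit solver iterations.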
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 1288