Poincaré Gradient Descent: A Projection-Based Riemannian Method on the Poincaré Ball
Keywords: Riemannian optimization, Poincaré ball, gradient descent, hyperbolic geometry, geodesic convexity, hyperbolic learning
TL;DR: We propose Poincaré Gradient Descent (PGD), a fast first-order optimizer on the Poincaré ball that replaces the exponential map with a geometry-preserving, projection-based retraction, matching the convergence rate of Riemannian GD at a lower per-iteration cost.
Abstract: We propose Poincaré Gradient Descent (PGD), a first-order optimization method on the Poincaré ball. PGD replaces the exponential map with a projection-based retraction that is first-order equivalent to the exponential map while preserving geodesic structure, thereby reducing per-iteration computational cost. We establish convergence guarantees for smooth geodesically convex and strongly convex objectives, showing that PGD matches the iteration complexity of Riemannian Gradient Descent (RGD). A Möbius isometric initialization and a step-size equivalence analysis further ensure stable and efficient updates. Numerical experiments confirm that PGD attains the same theoretical convergence rate as RGD while running significantly faster in practice, offering a simple and effective framework for scalable optimization in hyperbolic space.
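The paper's exact retraction, step-size rule, and Möbius initialization are not reproduced on this page, so the following is only a minimal NumPy sketch of what one projection-based PGD step could look like, assuming the standard conformal Poincaré metric with factor λ_x = 2/(1 − ‖x‖²). The helper names `riemannian_grad`, `project_to_ball`, and `pgd_step`, and the ε-clipping retraction, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def riemannian_grad(x, euclidean_grad):
    """Rescale the Euclidean gradient by the inverse Poincaré metric.

    The Poincaré ball metric is conformal, g_x = lambda_x^2 * I with
    lambda_x = 2 / (1 - ||x||^2), so grad f(x) = ((1 - ||x||^2)^2 / 4) * df(x).
    """
    scale = (1.0 - np.dot(x, x)) ** 2 / 4.0
    return scale * euclidean_grad

def project_to_ball(x, eps=1e-5):
    """Illustrative projection-based retraction (an assumption, not the
    paper's method): if the raw step leaves the open unit ball, rescale
    the point back to radius 1 - eps."""
    norm = np.linalg.norm(x)
    if norm >= 1.0 - eps:
        return x * (1.0 - eps) / norm
    return x

def pgd_step(x, euclidean_grad, lr=0.1):
    """One hypothetical PGD update: a Riemannian gradient step followed by
    projection back onto the ball, in place of the exponential map."""
    v = riemannian_grad(x, euclidean_grad)
    return project_to_ball(x - lr * v)

# Toy usage: minimize f(x) = ||x - a||^2 for a target a inside the ball,
# whose Euclidean gradient is 2 * (x - a).
a = np.array([0.3, 0.4])
x = np.zeros(2)
for _ in range(200):
    x = pgd_step(x, 2.0 * (x - a), lr=0.5)
```

The key cost saving the abstract describes comes from the last step: the projection is a norm check and at most one rescaling, whereas the exponential map on the Poincaré ball requires evaluating hyperbolic functions and Möbius addition at every iteration.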
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 7