Geometry-grounded Representation Learning and Generative Modeling

Published: 24 Dec 2025 · Last Modified: 24 Dec 2025 · ICLR 2026 Workshop Proposals · CC BY 4.0
Keywords: Geometric deep learning, Geometry-grounded representations, Generative modeling, Structure-inducing learning, Learning on manifolds
TL;DR: GRaM is a workshop exploring generative modeling and representation learning from a geometric perspective.
Abstract: Real-world data often originates from physical systems governed by geometric and physical laws. Yet most machine learning methods treat this data as abstract vectors, ignoring underlying structure that could improve both performance and interpretability. Geometry provides powerful guiding principles, from group equivariance to non-Euclidean metrics, that can preserve the symmetries and structure inherent in the data. We believe these geometric tools are well suited, and perhaps essential, for representation learning and generative modeling. We propose GRaM, a workshop centered on the principle of _grounding in geometry_, which we define as follows: _an approach is geometrically grounded if it respects the geometric structure of the problem domain and supports geometric reasoning_. This year, we aim to explore the relevance of geometric methods, particularly in the context of large models, focusing on the theme of _scale and simplicity_. We seek to understand when geometric grounding remains necessary, how to scale geometric approaches effectively, and when geometric constraints can be relaxed in favor of simpler alternatives.
Submission Number: 76