TL;DR: We promote the study of function spaces parameterized by deep learning models via algebraic geometry.
Abstract: In this position paper, we promote the study of function spaces parameterized by machine learning models through the lens of algebraic geometry. To this end, we focus on algebraic models, such as neural networks with polynomial activations, whose associated function spaces are semi-algebraic varieties. We outline a dictionary between algebro-geometric invariants of these varieties, such as dimension, degree, and singularities, and fundamental aspects of machine learning, such as sample complexity, expressivity, training dynamics, and implicit bias. Along the way, we review the literature and discuss ideas beyond the algebraic domain. This work lays the foundations of a research direction bridging algebraic geometry and deep learning, which we refer to as neuroalgebraic geometry.
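As a minimal illustrative sketch (an assumed example, not drawn from the submission itself) of the objects the abstract refers to: for a one-hidden-layer network with quadratic activation, the parameterization map is polynomial in the weights, so the set of representable functions is a semialgebraic set.

```latex
% Minimal sketch (assumed example, not taken from the submission): a
% one-hidden-layer network with k hidden units and quadratic activation
% \sigma(t) = t^2 on inputs x \in \mathbb{R}^d.
\[
  f_\theta(x) \;=\; \sum_{i=1}^{k} v_i \,\bigl(w_i^\top x\bigr)^2,
  \qquad
  \theta = (w_1,\dots,w_k,\,v_1,\dots,v_k) \in \mathbb{R}^{k(d+1)}.
\]
% Each f_\theta is a quadratic form in x, and the associated function space
%   \mathcal{M} = \{ f_\theta : \theta \in \mathbb{R}^{k(d+1)} \}
% is the image of a polynomial map, hence a semialgebraic set
% (by Tarski--Seidenberg); its dimension, degree, and singularities are the
% kind of invariants the abstract relates to learning-theoretic quantities.
```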
Lay Summary: How do neural networks learn? Why do some neural architectures perform better than others? Despite the success of AI, these questions remain largely mysterious.
We explore a way to understand the inner workings of neural networks using tools from a branch of mathematics called algebraic geometry. Specifically, we look at the space of functions that these models can represent, and discuss how to understand its geometry via tools from algebra. We refer to this field as neuroalgebraic geometry.
We believe that neuroalgebraic geometry can offer unique insights, complementing the other mathematical fields that have been proposed to tackle fundamental questions about how neural networks work.
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: neuromanifolds, algebraic geometry, singular learning theory, polynomial activation functions, loss landscape
Submission Number: 90