Uncovering Critical Sets of Deep Neural Networks via Sample-Independent Critical Lifting

11 May 2025 (modified: 29 Oct 2025) · Submitted to NeurIPS 2025 · CC BY 4.0
Keywords: Deep Learning, Machine Learning, Neural Network Theory, Loss Landscape Analysis, Embedding Principle, Critical Lifting Operator
TL;DR: We propose a sample-independent critical lifting operator and investigate the sample dependence of output-preserving critical points.
Abstract: This paper investigates the sample dependence of critical points of neural networks. We introduce a sample-independent critical lifting operator that associates a parameter of one network with a set of parameters of another, thus defining sample-dependent and sample-independent lifted critical points. We then show by example that previously studied critical embeddings do not capture all sample-independent lifted critical points. Finally, we demonstrate the existence of sample-dependent lifted critical points for sufficiently large sample sizes and prove that saddle points appear among them.
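The abstract does not spell out the lifting operator's definition, but the embedding-principle literature it builds on uses output-preserving maps from a narrow network's parameters to a wider network's parameters. The sketch below is an illustrative, hypothetical example (not the paper's construction): the classic "neuron splitting" embedding for a two-layer tanh network, which produces wider-network parameters realizing the same output function. The function names (`narrow_net`, `split_neuron`) and the splitting ratio `t` are assumptions introduced here for illustration.

```python
import numpy as np

def narrow_net(x, W, a):
    """Two-layer network: f(x) = sum_k a_k * tanh(W_k . x)."""
    return np.tanh(x @ W.T) @ a

def split_neuron(W, a, k, t=0.3):
    """Split neuron k into two copies whose output weights sum to a_k,
    preserving the network's output function (an output-preserving map)."""
    W_wide = np.vstack([W, W[k]])            # duplicate neuron k's input weights
    a_wide = np.append(a, (1.0 - t) * a[k])  # share its output weight between copies
    a_wide[k] = t * a[k]
    return W_wide, a_wide

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))    # 3 hidden neurons, 5-dimensional input
a = rng.normal(size=3)
x = rng.normal(size=(10, 5))   # 10 sample inputs

W_wide, a_wide = split_neuron(W, a, k=1)
# Outputs agree on every input, independent of the sample:
print(np.allclose(narrow_net(x, W, a), narrow_net(x, W_wide, a_wide)))  # True
```

Varying the ratio `t` already gives a one-parameter family of wider-network parameters with identical outputs, which conveys why a lifting from one parameter to a *set* of parameters is the natural object; the sample-dependent versus sample-independent distinction studied in the paper concerns whether such lifted points are critical for any data sample or only for particular ones.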
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 21861