Three ways that non-differentiability affects neural network training

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: neural networks, gradient descent, optimization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Non-differentiability dramatically impacts the neural network training process
Abstract: This paper investigates how non-differentiability affects three aspects of the neural network training process. We first analyze fully connected neural networks with ReLU activations, for which we show that convergence-rate results derived for continuously differentiable functions grossly underestimate the actual rate of convergence. Next, we analyze the problem of $L_{1}$ regularization and show that the solutions produced by deep learning solvers are unreliable even for the $L_{1}$-penalized linear model. Finally, we analyze the edge of stability problem, where we show that all convex non-smooth functions display unstable convergence, and we provide an example of a result derived for differentiable functions that fails in the non-differentiable setting. More generally, our results suggest that accounting for the non-differentiability of neural networks in the training process is essential for developing better algorithms and for gaining a better understanding of the training process in general.
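To make the $L_{1}$ claim concrete, here is a minimal, hypothetical sketch (not the paper's code or experimental setup): on a single-feature lasso problem, the closed-form proximal (soft-thresholding) solution is exactly zero, while plain subgradient descent of the kind effectively performed by autodiff-based deep learning solvers only oscillates around zero and never returns an exact zero. The data, penalty strength `lam`, learning rate, and iteration count are all illustrative assumptions.

```python
# Hypothetical illustration (not from the paper): lasso with a single
# standardized feature.  The proximal / soft-thresholding solution is
# exactly zero, while subgradient descent (roughly what autodiff-based
# solvers do with an L1 penalty) only oscillates around zero.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(scale=0.5, size=n)  # weak signal, mostly noise
lam = 0.2                                     # L1 penalty strength (assumed)

# Objective: (1 / (2n)) * ||y - w * x||^2 + lam * |w|
s, c = x @ x, x @ y

# Closed-form minimizer via soft-thresholding of the least-squares solution.
w_prox = np.sign(c) * max(abs(c) - lam * n, 0.0) / s

# Plain subgradient descent, using sign(w) as a subgradient of |w|.
w, lr = 1.0, 1e-3
for _ in range(20_000):
    grad = (w * s - c) / n + lam * np.sign(w)
    w -= lr * grad

print(f"soft-thresholding solution: {w_prox:.3e}")  # exactly 0 for this data
print(f"subgradient descent:        {w:.3e}")       # small but never exactly 0
```

The design choice here mirrors the abstract's point: the non-smooth $|w|$ term is what produces exact sparsity, and a solver that treats it through subgradients alone loses that property even in this one-dimensional toy setting.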
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6416