This paper investigates how non-differentiability affects three different aspects of the neural network training process. We first analyze fully connected neural networks with ReLU activations, for which we show that convergence rate results derived under continuous-differentiability assumptions grossly underestimate the actual rate of convergence. Next, we analyze the problem of $L_{1}$ regularization and show that the solutions produced by deep learning solvers are unreliable even for the $L_{1}$-penalized linear model. Finally, we analyze the edge of stability problem, where we show that all convex non-smooth functions display unstable convergence, and we provide an example of a result derived for differentiable functions that fails in the non-differentiable setting. More generally, our results suggest that accounting for the non-differentiability of neural networks during training is essential for developing better algorithms and for gaining a better understanding of the training process in general.
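Where the abstract refers to deep learning solvers being unreliable for the $L_{1}$-penalized linear model, a small experiment can illustrate the flavor of the issue. The sketch below is not the paper's code: it contrasts plain (sub)gradient descent, which is how an autograd-based solver treats the $L_{1}$ term, with proximal gradient descent (ISTA), which handles the non-differentiable penalty via soft-thresholding. The data, step size, and penalty strength are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 5                         # samples, features, true non-zeros
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:k] = rng.standard_normal(k)
y = X @ w_true + 0.01 * rng.standard_normal(n)

lam, lr, steps = 0.1, 0.1, 5000              # illustrative hyper-parameters

def grad_ls(w):
    # Gradient of the smooth least-squares part (1/2n)||Xw - y||^2.
    return X.T @ (X @ w - y) / n

# (1) Subgradient descent: differentiates lam*|w| as lam*sign(w), as autograd would.
w_sub = np.zeros(d)
for _ in range(steps):
    w_sub -= lr * (grad_ls(w_sub) + lam * np.sign(w_sub))

# (2) Proximal gradient descent (ISTA): soft-thresholding handles |w| exactly.
w_prox = np.zeros(d)
for _ in range(steps):
    z = w_prox - lr * grad_ls(w_prox)
    w_prox = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)

# The subgradient iterate is essentially never exactly sparse, while the
# proximal iterate sets most of the d - k irrelevant coordinates to exactly 0.
print("exact zeros (subgradient):", int(np.sum(w_sub == 0.0)))
print("exact zeros (proximal):   ", int(np.sum(w_prox == 0.0)))
```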
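The claim that convex non-smooth functions exhibit unstable convergence can likewise be illustrated on the simplest example, $f(x)=|x|$. The hypothetical fixed-step subgradient descent below (not the paper's experiment) ends up bouncing between points at distance roughly the step size from the minimizer, for every choice of step size.

```python
import numpy as np

def subgrad_abs(x):
    # A subgradient of f(x) = |x| (takes the value 0 at x = 0).
    return np.sign(x)

# Fixed-step (sub)gradient descent on |x|: shrinking eta shrinks the
# oscillation band but never yields convergence to the minimizer x = 0.
for eta in (0.5, 0.1, 0.01):
    x = np.sqrt(2.0)        # irrational start, so the iterates never hit 0 exactly
    history = []
    for _ in range(1000):
        x -= eta * subgrad_abs(x)
        history.append(x)
    print(f"eta = {eta}: last four iterates {np.round(history[-4:], 4)}")
```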