A Survey of Optimization Methods for Training DL Models: Theoretical Perspective on Convergence and Generalization
Abstract: As data sets grow in size and complexity, it is becoming more difficult to extract useful features from them using hand-crafted feature extractors. For this reason, deep learning (DL) frameworks are now widely popular. DL frameworks process input data using multi-layer networks. Importantly, DL approaches, as opposed to traditional machine learning (ML) methods, automatically find high-quality representations of complex data that are useful for a particular learning task. The Holy Grail of DL, and one of the most mysterious challenges in all of modern ML, is to develop a fundamental understanding of DL optimization and generalization. While numerous optimization techniques have been introduced in the literature to navigate the highly non-convex DL optimization landscape, many survey papers reviewing them primarily focus on summarizing these methodologies, often overlooking their critical theoretical analyses. In this paper, we provide an extensive summary of the theoretical foundations of optimization methods in DL, presenting various methodologies together with their convergence analyses and generalization abilities. This paper not only covers the theoretical analysis of popular generic gradient-based first-order and second-order methods, but also the analysis of optimization techniques that adapt to the properties of the DL loss landscape and explicitly encourage the discovery of well-generalizing optimal points. We additionally extend our discussion to distributed optimization methods that facilitate parallel computations, including both centralized and decentralized approaches. We provide both convex and non-convex analyses for the optimization algorithms considered in this survey. Finally, this paper aims to serve as a comprehensive theoretical handbook on optimization methods for DL, offering insights and understanding to both novice and seasoned researchers in the field.
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: The paper has been updated according to the reviewers' suggestions.
1. We added Tables 1 and 2 to summarize the convergence results for the first- and second-order optimization methods and for the distributed (centralized and decentralized) optimization methods, and Table 3 to summarize the generalization results for the first-order methods and the landscape-aware optimization methods. The tables can be found in the introduction section of the updated revision. They provide guidance on the differences between the methods and comments regarding their practical performance.
2. We address the specific comments of Reviewer HTdu and incorporate the corresponding changes into our updated revision. All edits are marked in blue. Please refer to the rebuttal for the detailed edits.
Assigned Action Editor: ~Konstantin_Mishchenko1
Submission Number: 3077