When do Convolutional Neural Networks Stop Learning?

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Deep Learning, Convolutional Neural Network, CNN, Epoch, Training, Data
Abstract: Convolutional Neural Networks (CNNs) are among the most important architectures in computer vision, showing impressive performance on tasks such as image classification, detection, and segmentation. In the training phase of a CNN, an arbitrary number of epochs is used to train the network. In a single epoch, the entire training data, divided into batches, is fed to the network. However, the optimal number of epochs required to train a neural network is not well established. In practice, validation data is used to identify the generalization gap, and training is stopped when that gap starts to increase in order to avoid overfitting. However, this is a trial-and-error approach. This raises a critical question: is it possible to estimate when a neural network stops learning based only on the training data? In this work, we introduce the stability property of data in layers and, based on this property, predict the near-optimal epoch number of a CNN. We do not use any validation data to predict the near-optimal epoch number. We evaluate our hypothesis on six different CNN models and three different datasets (CIFAR-10, CIFAR-100, SVHN). Our approach saves, on average, 58.49\% of the computational time needed to train a CNN model. Our code is available at https://github.com/PaperUnderReviewDeepLearning/Optimization.
One-sentence Summary: When do Convolutional Neural Networks Stop Learning?
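The abstract sketches a stopping criterion that relies only on training data: track how the "data in layers" (layer activations) evolve across epochs and stop once they stabilize. Below is a minimal, hypothetical PyTorch sketch of that idea. The stability measure (relative change of a convolutional layer's activations on a fixed training batch), the threshold, and the patience value are illustrative assumptions, not the authors' actual definition.

```python
# Hypothetical sketch: stop training when a layer's outputs on a fixed training
# batch stop changing between epochs. The measure and constants are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy CNN and synthetic stand-ins for training data.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(64, 3, 32, 32)          # fixed probe batch drawn from the training set
y = torch.randint(0, 10, (64,))

def layer_output(batch):
    """Activations of the first convolutional layer ("data in the layer")."""
    with torch.no_grad():
        return model[0](batch)

prev = layer_output(x)
threshold, patience, stable_epochs = 1e-3, 3, 0

for epoch in range(1, 101):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

    curr = layer_output(x)
    # Assumed stability measure: relative change of the layer's activations.
    change = (curr - prev).norm() / (prev.norm() + 1e-12)
    prev = curr

    stable_epochs = stable_epochs + 1 if change < threshold else 0
    if stable_epochs >= patience:       # activations no longer changing much
        print(f"Stopping at epoch {epoch}: layer activations have stabilized.")
        break
```

Unlike validation-based early stopping, this criterion never touches held-out data; it only compares the network's internal representations of the training data across epochs.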
