Neural Data Filter for Bootstrapping Stochastic Gradient Descent

ICLR 2017 Invite to Workshop
Abstract: Mini-batch based Stochastic Gradient Descent (SGD) has been widely used to train deep neural networks efficiently. In this paper, we design a general framework to automatically and adaptively select training data for SGD. The framework is based on neural networks, and we call it the Neural Data Filter (NDF). In NDF, the whole training process of the original neural network is monitored and guided by a deep reinforcement learning network, which decides whether to filter out some data from each sequentially arriving mini-batch so as to maximize the future cumulative reward (e.g., validation accuracy). SGD accompanied by NDF uses less data and converges faster while achieving accuracy comparable to standard SGD trained on the full dataset. Our experiments show that NDF bootstraps SGD training for different neural network models, including a multilayer perceptron and a recurrent neural network, on various types of tasks including image classification and text understanding.
TL;DR: We propose a reinforcement-learning-based teacher-student framework that filters training data to boost SGD convergence.
Conflicts: microsoft.com, ustc.edu.cn
Keywords: Reinforcement Learning, Deep Learning, Optimization
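
For concreteness, below is a minimal sketch of the kind of filtering loop the abstract describes, written in PyTorch. All names here (DataFilterPolicy, ndf_step, reward_fn) are illustrative assumptions, not the authors' implementation: the per-example state is reduced to the current training loss, and an immediate reward stands in for the paper's future cumulative reward.

```python
import torch
import torch.nn as nn

class DataFilterPolicy(nn.Module):
    """Policy network that outputs a keep-probability for each training example."""
    def __init__(self, state_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(),
                                 nn.Linear(32, 1))

    def forward(self, states):  # states: (batch, state_dim)
        return torch.sigmoid(self.net(states)).squeeze(-1)

def ndf_step(model, model_opt, policy, policy_opt, x, y, loss_fn, reward_fn):
    """One mini-batch of NDF-style training.

    Assumes loss_fn uses reduction='none' (per-example losses) and reward_fn()
    returns a scalar such as validation accuracy (hypothetical helper).
    """
    # Build per-example state features; the paper's richer features are
    # collapsed here to the current per-example loss for brevity.
    with torch.no_grad():
        per_ex_loss = loss_fn(model(x), y)
    states = per_ex_loss.unsqueeze(-1)

    # Sample a keep/drop decision per example from the policy.
    keep_prob = policy(states)
    keep = torch.bernoulli(keep_prob)

    if keep.sum() > 0:
        # Standard SGD step, but only on the retained subset.
        idx = keep.bool()
        model_opt.zero_grad()
        loss_fn(model(x[idx]), y[idx]).mean().backward()
        model_opt.step()

    # REINFORCE update of the filter policy: push up the log-probability of
    # the sampled decisions, scaled by the observed reward.
    reward = reward_fn()
    log_prob = (keep * keep_prob.clamp_min(1e-8).log()
                + (1 - keep) * (1 - keep_prob).clamp_min(1e-8).log()).sum()
    policy_opt.zero_grad()
    (-reward * log_prob).backward()
    policy_opt.step()
```

In this sketch the filter and the base model are trained jointly: the policy gradient step rewards keep/drop patterns that lead to higher validation accuracy, which is the teacher-student interaction the TL;DR refers to.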