RandomOut: Using a convolutional gradient norm to win The Filter Lottery

29 Mar 2024 (modified: 18 Feb 2016) · ICLR 2016 workshop submission · Readers: Everyone
Abstract: Convolutional neural networks are sensitive to the random initialization of filters. We call this The Filter Lottery (TFL) because the random numbers used to initialize the network determine whether you will ``win'' and converge to a satisfactory local minimum. This issue forces networks to contain more filters (be wider) to achieve higher accuracy, because wider networks have better odds that some filters are transformed into highly discriminative features, at the risk of introducing redundant ones. To deal with this, we propose to evaluate and replace specific convolutional filters that have little impact on the prediction. We use the gradient norm to evaluate the impact of a filter on the error, and re-initialize filters when the gradient norm of their weights falls below a specific threshold. This consistently improves accuracy across two datasets, by up to 1.8%. Our scheme, RandomOut, allows us to increase the number of filters explored without increasing the size of the network. This yields more compact networks which can train and predict with less computation, thus allowing more powerful CNNs to run on mobile devices.
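The criterion described in the abstract can be sketched in a few lines: compute one L2 norm per filter over that filter's weight gradients, and re-draw the weights of any filter whose norm falls below the threshold. This is a minimal numpy illustration, not the authors' implementation; the function name, the threshold value, and the Gaussian re-initialization scale are assumptions.

```python
import numpy as np

def randomout_step(filters, grads, threshold, rng):
    """Re-initialize filters whose weight-gradient norm is below `threshold`.

    A sketch of the RandomOut criterion: `filters` and `grads` have shape
    (num_filters, channels, height, width). The init scale is an assumption.
    """
    filters = filters.copy()
    # One norm per filter: flatten each filter's gradient, take its L2 norm.
    norms = np.linalg.norm(grads.reshape(grads.shape[0], -1), axis=1)
    dead = norms < threshold
    # Replace low-impact filters with fresh random weights.
    filters[dead] = rng.normal(0.0, 0.05, size=(int(dead.sum()),) + filters.shape[1:])
    return filters, dead

rng = np.random.default_rng(0)
filters = rng.normal(0.0, 0.05, size=(4, 3, 3, 3))  # 4 filters, 3 channels, 3x3
grads = np.zeros_like(filters)
grads[0] = 1.0  # only filter 0 receives a meaningful gradient
new_filters, dead = randomout_step(filters, grads, threshold=1e-3, rng=rng)
print(dead)  # filters 1-3 are flagged and re-initialized
```

In a training loop this check would run after each backward pass, so a re-drawn filter gets a new chance to "win" the lottery without adding any extra filters to the network.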
Conflicts: cs.umass.edu, umb.edu