Optimizing Neural Networks with Gradient Lexicase Selection

Anonymous

Sep 29, 2021 (edited Oct 05, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: deep learning, lexicase selection, optimization, evolutionary algorithms
  • Abstract: Lexicase selection is a parent selection method in which selection is based on performance on individual cases, assessed in random order, rather than on aggregated metrics such as loss or accuracy. While lexicase selection and its variants have demonstrated success in genetic programming and genetic algorithms, and more recently in symbolic regression and robotics, their application to other forms of modern machine learning has not yet been explored. In this paper, we investigate the effectiveness of lexicase selection in training deep neural networks and illustrate how its general idea can fit into the context of modern machine learning at scale. We propose Gradient Lexicase Selection, an optimization method that combines stochastic gradient descent and lexicase selection in an evolutionary fashion. Experimental results show that the proposed method improves the generalization performance of popular deep neural network architectures on benchmark image classification datasets and helps models learn more diverse representations. A minimal sketch of the core selection step is shown after this list.
  • One-sentence Summary: We propose Gradient Lexicase Selection, an evolutionary optimization method that improves the generalization of deep neural networks.
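The abstract describes lexicase selection only at a high level, so the following is a minimal illustrative sketch of the selection step it refers to, not the authors' implementation. It assumes a hypothetical `case_errors` matrix holding each candidate's error on each case (e.g., per-sample losses); in Gradient Lexicase Selection the candidates would presumably be SGD-trained variants of a network, but that training loop is specified in the paper, not reproduced here.

```python
import random
import numpy as np

def lexicase_select(case_errors, tolerance=0.0):
    """Pick one parent index by lexicase selection.

    case_errors: (num_candidates, num_cases) array of per-case errors,
        lower is better. Names here are illustrative assumptions,
        not the paper's API.
    tolerance: slack for "best on this case" (epsilon-lexicase style).
    """
    survivors = list(range(case_errors.shape[0]))
    cases = list(range(case_errors.shape[1]))
    random.shuffle(cases)  # cases are considered in random order
    for case in cases:
        errs = case_errors[survivors, case]
        best = errs.min()
        # keep only candidates that are (near-)best on this case
        survivors = [s for s, e in zip(survivors, errs)
                     if e <= best + tolerance]
        if len(survivors) == 1:
            break
    return random.choice(survivors)  # break remaining ties at random
```

For example, with three candidates evaluated on four cases, the winner depends on the random case ordering rather than on mean error, which is what lets lexicase selection reward specialists and preserve diversity:

```python
errors = np.array([[0.2, 0.9, 0.1, 0.4],
                   [0.2, 0.3, 0.5, 0.4],
                   [0.8, 0.3, 0.1, 0.1]])
parent = lexicase_select(errors)
```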