Meta-Learning by the Baldwin Effect

GECCO (Companion) 2018
Abstract: We show that the Baldwin effect is capable of evolving few-shot supervised and reinforcement learning mechanisms by shaping the hyperparameters and the initial parameters of deep learning algorithms. This method rivals a recent meta-learning algorithm, MAML (Model-Agnostic Meta-Learning), which uses second-order gradients instead of evolution to learn a set of reference parameters that allow rapid adaptation to tasks sampled from a distribution. The Baldwin effect does not require gradients to be backpropagated to the reference parameters or hyperparameters, and it permits effectively any number of gradient updates in the inner loop, allowing strong learning-dependent biases to be acquired.
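To make the contrast with MAML concrete, the sketch below illustrates the Baldwinian scheme the abstract describes: an evolutionary outer loop selects initial parameters by how well a few inner-loop gradient steps adapt them to sampled tasks, with no gradients flowing back to those initial parameters. This is not the authors' implementation; the toy task (1-D linear regression with a random slope), the selection scheme, and all names are assumptions for illustration only.

```python
# Minimal sketch of Baldwinian meta-learning (NOT the paper's code).
# Outer loop: evolution over the initial parameter theta0.
# Inner loop: plain SGD on a sampled task; fitness is post-adaptation loss.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a random slope w*; data are (x, w* x) pairs."""
    return rng.uniform(-2.0, 2.0)

def inner_loop_loss(theta0, w_star, steps=5, lr=0.1, n=10):
    """Adapt theta0 to a task by SGD and return the post-adaptation loss.
    Gradients are used only inside this inner loop; theta0 itself is never
    updated by backpropagation (the Baldwinian property)."""
    x = rng.normal(size=n)
    y = w_star * x
    w = theta0
    for _ in range(steps):
        grad = 2.0 * np.mean((w * x - y) * x)  # d/dw of the MSE
        w = w - lr * grad
    return float(np.mean((w * x - y) ** 2))

def fitness(theta0, n_tasks=8):
    """Negative mean post-adaptation loss over a batch of sampled tasks."""
    return -np.mean([inner_loop_loss(theta0, sample_task())
                     for _ in range(n_tasks)])

# Simple truncation-selection evolution over the initial parameter.
pop = rng.normal(size=32)
for gen in range(50):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-8:]]                # keep the best 8
    pop = rng.choice(parents, size=32) + 0.1 * rng.normal(size=32)

best = pop[np.argmax([fitness(t) for t in pop])]
print(f"evolved initial parameter: {best:.3f}")
```

Because fitness is measured after adaptation rather than differentiated through it, the same outer loop would work with many inner-loop steps or with non-differentiable hyperparameters, which is the flexibility the abstract claims over MAML's second-order gradients.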