Generalisable Agents for Neural Network Optimisation

Published: 28 Oct 2023, Last Modified: 23 Nov 2023
WANT@NeurIPS 2023 Poster
Keywords: learned optimisation, learned optimiser, learning rate schedules, optimisation, deep learning, reinforcement learning
TL;DR: We train RL agents (one per layer) in simple supervised settings to learn to improve optimisation by taking actions, such as adapting the learning rate, and then generalise to more resource-intensive supervised learning tasks.
Abstract: Optimising deep neural networks is a challenging task due to complex training dynamics, high computational requirements, and long training times. To address this difficulty, we propose the framework of Generalisable Agents for Neural Network Optimisation (GANNO)---a multi-agent reinforcement learning (MARL) approach that learns to improve neural network optimisation by dynamically and responsively scheduling hyperparameters during training. GANNO utilises an agent per layer that observes localised network dynamics and accordingly takes actions to adjust these dynamics at a layerwise level to collectively improve global performance. In this paper, we use GANNO to control the layerwise learning rate and show that the framework can yield useful and responsive schedules that are competitive with handcrafted heuristics. Furthermore, GANNO is shown to perform robustly across a wide variety of unseen initial conditions, and can successfully generalise to harder problems than it was trained on. Our work presents an overview of the opportunities that this paradigm offers for training neural networks, along with key challenges that remain to be overcome.
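The abstract's core idea, one agent per layer that observes localised dynamics and multiplicatively adjusts that layer's learning rate, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the `LayerAgent` class, the observation fields, and the placeholder random policy are all assumptions; in GANNO the policy would be a trained MARL policy.

```python
import random

# Multiplicative learning-rate adjustments an agent can choose from
# (an assumed discrete action space for illustration).
ACTIONS = [0.5, 1.0, 2.0]

class LayerAgent:
    """Hypothetical per-layer agent controlling one layer's learning rate."""

    def __init__(self, init_lr=0.1):
        self.lr = init_lr

    def act(self, observation):
        # Placeholder policy: GANNO would map localised dynamics
        # (e.g. layerwise gradient norms) to an action via a trained
        # RL policy; here we just sample uniformly.
        scale = random.choice(ACTIONS)
        self.lr *= scale
        return self.lr

# One agent per network layer (3 layers in this toy example).
agents = [LayerAgent() for _ in range(3)]

# A single control step: each agent observes local dynamics
# and updates its layer's learning rate.
obs = {"grad_norm": 1.0, "loss": 2.3}  # toy localised observation
lrs = [agent.act(obs) for agent in agents]
```

Each agent acts on local observations, but the agents are trained jointly so their layerwise adjustments collectively improve global performance, which is what makes this a multi-agent rather than a single-controller formulation.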
Submission Number: 15