Handling Delay in Reinforcement Learning Caused by Parallel Computations of Neurons

Published: 19 Jun 2024, Last Modified: 02 Aug 2024 · ARLET 2024 Poster · CC BY 4.0
Keywords: reinforcement learning, delay, parallel computations
TL;DR: We suggest executing all neurons in parallel, which speeds up inference, and propose methods to effectively handle the associated delays.
Abstract: Biological neural networks operate in parallel, a feature that sets them apart from artificial neural networks and can significantly enhance inference speed. However, this parallelism introduces challenges: when each neuron operates asynchronously with a fixed execution time, an $N$-layer feed-forward neural network without skip connections experiences a delay of $N$ time-steps. While reducing the number of layers can decrease this delay, it also diminishes the network's expressivity. In this work, we investigate the balance between delay and expressivity in neural networks. In particular, we study different types of skip connections, such as residual connections, projections from every hidden representation to the action space, and projections from the observation to every hidden representation. We evaluate different architectures and show that those with skip connections exhibit strong performance across different neuron execution times, common reinforcement learning algorithms, and various environments, including four Mujoco environments and all MinAtar games. Additionally, we demonstrate that parallel execution of neurons can accelerate inference on standard modern hardware by 6-350\%.
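To make the delay/expressivity trade-off concrete, the following is a minimal sketch (not the authors' implementation; all names and dimensions are illustrative) of the architecture the abstract describes: in a parallel-execution model, each layer fires once per time-step using the previous step's activations, so information advances one layer per step and layer $k$ reflects the observation from $k$ steps ago. Adding projections from the observation to every hidden layer, and from every hidden layer to the action space, creates shallow paths that contribute lower-delay action estimates.

```python
import numpy as np

# Hypothetical sketch of parallel per-layer execution with skip connections.
# Assumed names: W_h (layer-to-layer), W_obs (observation skip-in),
# W_act (hidden-to-action skip-out) are illustrative, not from the paper.
rng = np.random.default_rng(0)
obs_dim, hidden, act_dim, n_layers = 8, 16, 4, 3

W_h = [rng.normal(scale=0.1, size=(hidden, obs_dim if k == 0 else hidden))
       for k in range(n_layers)]
W_obs = [rng.normal(scale=0.1, size=(hidden, obs_dim)) for _ in range(n_layers)]
W_act = [rng.normal(scale=0.1, size=(act_dim, hidden)) for _ in range(n_layers)]

def step(state, obs):
    """One parallel time-step: every layer fires simultaneously using the
    previous step's activations, so depth translates into delay."""
    new_state = []
    for k in range(n_layers):
        inp = obs if k == 0 else state[k - 1]
        # Observation-to-hidden skip connection shortens the delay path.
        h = np.tanh(W_h[k] @ inp + W_obs[k] @ obs)
        new_state.append(h)
    # Hidden-to-action projections from every layer: shallow layers
    # contribute an action estimate with fewer steps of delay.
    action = sum(W_act[k] @ new_state[k] for k in range(n_layers))
    return new_state, action

state = [np.zeros(hidden) for _ in range(n_layers)]
obs = rng.normal(size=obs_dim)
for _ in range(n_layers):
    state, action = step(state, obs)
print(action.shape)  # (4,)
```

Without the skip connections, a usable action would only emerge after all `n_layers` steps; with them, a (coarser) action estimate is available after a single step and is refined as deeper layers catch up.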
Submission Number: 116