Quantization-Aware Training for Multi-Agent Reinforcement Learning

Published: 01 Jan 2024, Last Modified: 28 Apr 2025, EUSIPCO 2024, CC BY-SA 4.0
Abstract: Deep Learning (DL) has increasingly become the preferred solution in a wide range of applications, such as robotics, which require high inference speed with minimal power consumption and performance degradation. In recent years, this has fueled the interest of the academic community in lower-precision architectures, with quantization techniques gaining increasing attention. Although quantization has been extensively studied in the literature, it remains a challenging task, especially in the case of Deep Reinforcement Learning (DRL), due to intrinsic difficulties in the training process. In this work, we focus on multi-agent environments, proposing a quantization-aware training method oriented to DRL that allows one to significantly lower the bit resolution of agents without affecting the execution accuracy of the given task. More specifically, the proposed method takes the quantization noise into account during training and quantizes the agents according to their parameter distributions. As demonstrated in the experimental results, the proposed method yields lower bit resolution agents with performance almost equal to that of the full-precision models, providing an interesting research direction for efficient DRL applications and potentially unlocking low energy consumption and lightweight capabilities during inference.
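The abstract describes simulating quantization noise during training and choosing quantization ranges from the agents' parameter distributions. Below is a minimal, hypothetical PyTorch sketch of this general idea (not the paper's actual implementation): a fake-quantization step with a straight-through estimator, where the clipping range of each layer is derived from the weight statistics (mean plus k standard deviations, with k a hypothetical hyperparameter).

```python
import torch
import torch.nn as nn


class FakeQuantize(torch.autograd.Function):
    """Simulates low-bit quantization in the forward pass while passing
    gradients straight through (straight-through estimator)."""

    @staticmethod
    def forward(ctx, w, num_bits, clip):
        # Symmetric uniform quantization within [-clip, clip].
        qmax = 2 ** (num_bits - 1) - 1
        scale = clip / qmax
        w_clipped = torch.clamp(w, -clip, clip)
        return torch.round(w_clipped / scale) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient flows unchanged to the full-precision weights.
        return grad_output, None, None


class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized during training.
    The clipping range follows the weight distribution (|mean| + k * std),
    a simple distribution-aware choice assumed here for illustration."""

    def __init__(self, in_features, out_features, num_bits=8, k=3.0):
        super().__init__(in_features, out_features)
        self.num_bits = num_bits
        self.k = k

    def forward(self, x):
        # Distribution-aware clipping threshold (hypothetical choice of k).
        clip = float((self.weight.mean().abs() + self.k * self.weight.std()).detach())
        w_q = FakeQuantize.apply(self.weight, self.num_bits, clip)
        return nn.functional.linear(x, w_q, self.bias)


if __name__ == "__main__":
    # Toy 4-bit policy network; in practice it would be trained with a DRL loss.
    policy = nn.Sequential(QuantLinear(8, 64, num_bits=4), nn.ReLU(),
                           QuantLinear(64, 2, num_bits=4))
    obs = torch.randn(16, 8)
    logits = policy(obs)
    logits.sum().backward()  # gradients reach full-precision weights via the STE
    print(logits.shape)
```

In a multi-agent setting, each agent's network could use such layers with per-agent bit widths, so that the quantization noise is present throughout training rather than applied only after convergence.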