Abstract: In this work, we explore the problem of offline reinforcement learning for a multi-agent system. Offline reinforcement learning differs from classical online and off-policy reinforcement learning in that agents must learn from a fixed and finite dataset. We consider a scenario in which a large dataset has been produced by interactions between an agent and its environment. We suppose the dataset is too large to be processed efficiently by a single agent with limited resources, and so we consider a multi-agent network that cooperatively learns a control policy. We present a distributed reinforcement learning algorithm based on Q-learning and an approach called offline regularization. The main result of this work shows that the proposed algorithm converges in the sense that the squared norm of the error is asymptotically bounded by a constant determined by the number of samples in the dataset. In simulations, we implement the proposed algorithm to train agents to control both a linear system and a nonlinear system, namely the well-known cartpole system, and we provide results showing the performance of the trained agents.
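To make the setting concrete, the following is a minimal single-agent sketch of Q-learning from a fixed dataset with a simple l2 shrinkage penalty standing in for offline regularization. The penalty form, the coefficient `reg`, and the tabular setting are illustrative assumptions, not the paper's exact algorithm (which is distributed across a multi-agent network).

```python
import numpy as np

def offline_q_learning(dataset, n_states, n_actions,
                       gamma=0.99, alpha=0.1, reg=0.01, epochs=200):
    """Fit a Q-table from a fixed dataset of (s, a, r, s') tuples.

    The `reg` term shrinks each updated Q-value toward zero --
    an assumed stand-in for the paper's offline regularization,
    which keeps estimates bounded when the dataset is finite.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s_next in dataset:
            # Standard TD target computed only from logged transitions.
            target = r + gamma * Q[s_next].max()
            # TD update plus an l2 shrinkage penalty on the current entry.
            Q[s, a] += alpha * (target - Q[s, a]) - alpha * reg * Q[s, a]
    return Q

# Toy two-state chain: taking action 1 in state 0 yields reward 1
# and stays in state 0; everything else yields reward 0.
data = [(0, 1, 1.0, 0), (0, 0, 0.0, 1), (1, 0, 0.0, 1)] * 10
Q = offline_q_learning(data, n_states=2, n_actions=2)
```

In the distributed variant described by the abstract, the dataset would be partitioned across agents, each running updates of this kind on its own shard while exchanging estimates with neighbors.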