Abstract: This paper presents a novel data-driven approach for approximating the $\varepsilon$-Nash equilibrium in continuous-time linear quadratic Gaussian (LQG) games, where multiple agents interact with each other through their dynamics and infinite-horizon discounted costs. The core of our method involves solving two algebraic Riccati equations (AREs) and an ordinary differential equation (ODE) using state and input samples collected from the agents, eliminating the need for a priori knowledge of their dynamical models. The standard ARE is addressed through an integral reinforcement learning (IRL) technique, while the nonsymmetric ARE and the ODE are resolved by identifying the drift coefficients of the agents' dynamics under general conditions. Moreover, by imposing specific conditions on the models, we extend the IRL-based approach to approximately solve the nonsymmetric ARE. Numerical examples are given to demonstrate the effectiveness of the proposed algorithms.
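For orientation, a minimal sketch of the kind of discounted LQ problem and standard ARE the abstract refers to; the symbols $A$, $B$, $Q$, $R$, $\rho$, and $W_i$ below are generic placeholders and are not taken from the paper's actual model:
\begin{align*}
  \mathrm{d}x_i(t) &= \bigl(A x_i(t) + B u_i(t)\bigr)\,\mathrm{d}t + \sigma\,\mathrm{d}W_i(t), \\
  J_i(u_i) &= \mathbb{E}\int_0^{\infty} e^{-\rho t}\Bigl(x_i(t)^{\top} Q\, x_i(t) + u_i(t)^{\top} R\, u_i(t)\Bigr)\mathrm{d}t,
\end{align*}
for which the associated (symmetric) discounted ARE takes the familiar form
\[
  A^{\top} P + P A - \rho P - P B R^{-1} B^{\top} P + Q = 0 .
\]
The paper's game setting additionally involves coupling between agents, which gives rise to the nonsymmetric ARE and the ODE mentioned above.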