Abstract: Many applications, including power systems, robotics, and economics, involve a dynamical system interacting with a stochastic, hard-to-model environment. We adopt a reinforcement learning approach to controlling such systems. Specifically, we consider a deterministic, discrete-time, linear, time-invariant dynamical system coupled with a feature-based linear Markov process whose transition kernel is unknown. The objective is to learn a control policy that minimizes a quadratic cost over the system state, the Markov process, and the control input. Leveraging the structure of both components, we derive an explicit parametric form for the optimal state-action value function and the corresponding optimal policy. Our model is distinctive in combining aspects of the classical Linear Quadratic Regulator (LQR) and linear Markov decision process (MDP) frameworks: it retains the implementation simplicity of LQR while admitting the richer stochastic modeling afforded by linear MDPs, and it avoids estimating transition probabilities, thereby enabling direct policy improvement. In the nominal setting, where the linear system dynamics are known, we use tools from control theory to establish stability guarantees for the system under the learned policy and a sample complexity analysis of its convergence to the optimal policy. We further extend our framework to systems with Gaussian process noise and to systems with unknown linear dynamics. Numerical examples for the nominal, noisy, and unknown-dynamics settings demonstrate the effectiveness of our approach in learning the optimal control policy under partially known stochastic dynamics.
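To make the setup concrete, one plausible formalization consistent with the abstract's description is sketched below; all symbols ($x_t$, $s_t$, $u_t$, $A$, $B$, $Q$, $S$, $R$, $\phi$, $\mu$) and the exact cost structure are illustrative assumptions, not the paper's notation:

\[
x_{t+1} = A x_t + B u_t, \qquad s_{t+1} \sim P(\cdot \mid s_t), \quad P(s' \mid s) = \langle \phi(s), \mu(s') \rangle,
\]
\[
c_t = x_t^\top Q x_t + s_t^\top S s_t + u_t^\top R u_t,
\]

where the feature map $\phi$ is known, the measure $\mu$ (and hence the transition kernel) is unknown, and the goal is a policy $u_t = \pi(x_t, s_t)$ minimizing the expected cumulative cost. Under such a structure, the optimal state-action value function admits a parametric form in the LQR-style quadratic terms and the linear-MDP features, which is what permits policy improvement without estimating $\mu$ directly.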
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Arnob_Ghosh3
Submission Number: 8155