Abstract: In this paper, we present an output-feedback optimal adaptive regulator for an uncertain linear discrete-time system with intermittent feedback transmission and control execution. In particular, we propose a Q-learning technique to design a finite-horizon optimal controller with event-based availability of the observed state vector. An observer adaptively estimates the system dynamics along with the states. The parameters of the action-dependent value function, or Q-function, are estimated at the controller using the observer states, which are transmitted intermittently. Aperiodic update laws are proposed to estimate the Q-function parameters so that the terminal condition is satisfied. An event-triggering condition is derived to orchestrate the events, i.e., the time instants of feedback transmission and control execution, such that the closed-loop event-triggered system is ultimately bounded over the finite time horizon. It is also shown that the closed-loop system parameters converge asymptotically to zero as the time horizon extends to infinity. With the proposed method, the optimal control is learned online and forward in time without any knowledge of the system dynamics. Finally, the analytical design is validated via simulation of a numerical example.
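The abstract compresses several moving parts: an adaptive observer, finite-horizon Q-learning, aperiodic parameter updates, and an event-triggering condition. The sketch below is not the paper's design; it omits the observer and the time-varying terminal condition, and instead uses a plain recursive least-squares (RLS) Q-learning update with a fixed threshold trigger on a hypothetical two-state plant. All matrices, thresholds, and gains are illustrative assumptions, chosen only to show how event-based transmission, zero-order hold between events, and aperiodic Q-function updates can fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant, used only to generate data; the learner never reads A, B.
A = np.array([[0.9, 0.1],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)            # state penalty
R  = np.array([[1.0]])    # control penalty

n_x, n_u = 2, 1
n_z = n_x + n_u
n_phi = n_z * (n_z + 1) // 2   # size of the quadratic basis

def quad_basis(x, u):
    """Quadratic regressor phi(z): monomials z_i z_j, i <= j, with z = [x; u]."""
    z = np.concatenate([x, u])
    return np.array([z[i] * z[j] for i in range(n_z) for j in range(i, n_z)])

def H_from_theta(theta):
    """Rebuild the symmetric Q-function kernel H (Q(x,u) = z'Hz) from theta."""
    H = np.zeros((n_z, n_z))
    idx = 0
    for i in range(n_z):
        for j in range(i, n_z):
            H[i, j] = H[j, i] = theta[idx] if i == j else 0.5 * theta[idx]
            idx += 1
    return H

def greedy_u(theta, x):
    """u* = -H_uu^{-1} H_ux x; a small ridge keeps H_uu invertible early on."""
    H = H_from_theta(theta)
    Huu = H[n_x:, n_x:] + 1e-6 * np.eye(n_u)
    return -np.linalg.solve(Huu, H[n_x:, :n_x] @ x)

theta  = np.zeros(n_phi)         # Q-function parameters, unknown a priori
P_rls  = 1e3 * np.eye(n_phi)     # RLS covariance
x      = np.array([1.0, -0.5])
x_held = x.copy()                # last state received by the controller
u_held = np.zeros(n_u)
eps    = 0.05                    # assumed constant trigger threshold

for k in range(600):
    # Event trigger: transmit the state and recompute the control only when
    # the gap between the current and last-transmitted state grows too large.
    event = (k == 0) or np.linalg.norm(x - x_held) > eps
    if event:
        x_held = x.copy()
        u_held = greedy_u(theta, x_held) + 0.5 * 0.99**k * rng.standard_normal(n_u)

    u = u_held                                  # zero-order hold between events
    cost = float(x @ Qc @ x + u @ R @ u)
    x_next = A @ x + B @ u

    if event:
        # Aperiodic RLS update of theta toward the one-step Bellman target.
        phi = quad_basis(x, u)
        target = cost + quad_basis(x_next, greedy_u(theta, x_next)) @ theta
        gain = P_rls @ phi / (1.0 + phi @ P_rls @ phi)
        theta += gain * (target - phi @ theta)
        P_rls -= np.outer(gain, P_rls @ phi)
    x = x_next

H = H_from_theta(theta)
K_learned = np.linalg.solve(H[n_x:, n_x:], H[n_x:, :n_x])

# Reference gain from the Riccati recursion (uses A, B; for comparison only).
P = Qc.copy()
for _ in range(500):
    K_ref = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Qc + A.T @ P @ (A - B @ K_ref)

print("learned gain :", K_learned)
print("Riccati gain :", K_ref)
```

Note that the control is held constant between events and the Q-function parameters are updated only at event instants, loosely mirroring the aperiodic update laws described in the abstract; the paper's observer-based output feedback and finite-horizon analysis go well beyond this toy loop.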