Reinforcement Learning-Based Fault-Tolerant Control of Uncertain Strict-Feedback Nonlinear Systems With Intermittent Actuator Faults

Published: 2025. Last Modified: 08 Jan 2026. IEEE Trans. Neural Networks Learn. Syst., 2025. License: CC BY-SA 4.0.
Abstract: In this work, a novel reinforcement learning-based adaptive fault-tolerant control (FTC) scheme with actuator redundancy is presented for uncertain strict-feedback nonlinear systems. A learning-based switching function technique is established to automatically and successively activate different groups of actuators, guided by a switching performance index, so as to mitigate the impact of faulty actuators. The optimal tracking control problem (OTCP) of the strict-feedback nonlinear system is transformed into an equivalent optimal regulation problem for each affine subsystem via adaptive feedforward controllers. Subsequently, the designed objective functions, which are associated with the Hamilton–Jacobi–Bellman (HJB) estimation errors caused by neural network (NN) approximation, are minimized by the reinforcement learning algorithm without value or policy iterations. It is proved that the tracking objective is achieved and all signals in the closed-loop system remain bounded, provided that the minimum time interval between two successive failures is bounded. Theoretical results are verified by simulations.
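As a rough illustration of how a switching performance index might drive actuator-group selection, the Python sketch below activates the next group of redundant actuators once a monitored index exceeds a threshold. The class name `ActuatorGroupSwitcher`, the quadratic index, the threshold, and the dwell-time parameter are all assumptions made for illustration; they are not the paper's exact switching law.

```python
import numpy as np

# Hypothetical sketch: switch to the next group of redundant actuators whenever a
# monitored performance index exceeds a threshold, subject to a minimum dwell time.
# The index (||e||^2) and all parameters below are illustrative assumptions.

class ActuatorGroupSwitcher:
    def __init__(self, num_groups, threshold, min_dwell_time):
        self.num_groups = num_groups          # number of redundant actuator groups
        self.threshold = threshold            # bound on the switching performance index
        self.min_dwell_time = min_dwell_time  # minimum time between two switches
        self.active_group = 0                 # currently active actuator group
        self.last_switch_time = 0.0

    def update(self, t, tracking_error):
        """Return the active group, switching if the performance index is violated."""
        performance_index = float(np.dot(tracking_error, tracking_error))  # e.g. ||e||^2
        dwell_ok = (t - self.last_switch_time) >= self.min_dwell_time
        if performance_index > self.threshold and dwell_ok:
            self.active_group = (self.active_group + 1) % self.num_groups
            self.last_switch_time = t
        return self.active_group


# Example: the index stays below the threshold, so the first group stays active.
switcher = ActuatorGroupSwitcher(num_groups=3, threshold=0.5, min_dwell_time=1.0)
group = switcher.update(t=2.0, tracking_error=np.array([0.1, -0.2]))
print(group)  # 0
```

The dwell-time check mirrors the abstract's condition that the time interval between successive failures (and hence switches) is bounded; without it, a persistently large index could trigger arbitrarily fast switching.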