Dynamic condition-based maintenance for shock systems based on damage evolutions using deep reinforcement learning
Abstract: In industrial settings, maintenance tasks and resources must be allocated to systems to avoid unplanned downtime. We explore a dynamic condition-based maintenance strategy for systems comprising multiple components, in which each component is subject to external shocks over time and is maintained individually. For each component, random shocks arrive according to a homogeneous Poisson process, and the evolution of the component's state is characterized by a Markov process. The dynamic condition-based maintenance policy for the developed shock system, formulated as a Markov decision process, is introduced. To minimize the overall system cost, the maintenance optimization problem is formulated to determine the most cost-effective maintenance actions. A tailored advantage actor-critic algorithm in deep reinforcement learning is proposed to address the challenge of high dimensionality. Finally, numerical examples demonstrate the efficiency of the proposed method in searching for optimal maintenance actions and reducing maintenance costs.
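The degradation model in the abstract, shocks arriving via a homogeneous Poisson process with each shock advancing a component's discrete state through a Markov chain, can be sketched as follows. All numbers here (shock rate `lam`, transition matrix `P`, horizon) are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical 3-state component: 0 = as-new, 1 = degraded, 2 = failed.
# Each row of P gives the transition probabilities applied when a shock hits.
# State 2 is absorbing (the component stays failed until maintained).
P = [
    [0.6, 0.3, 0.1],  # from state 0
    [0.0, 0.7, 0.3],  # from state 1
    [0.0, 0.0, 1.0],  # from state 2 (absorbing)
]

def simulate_component(lam, horizon, rng):
    """Simulate one component up to `horizon`.

    Shock times follow a homogeneous Poisson process with rate `lam`,
    i.e. exponential inter-arrival gaps. Returns the list of
    (shock_time, state_after_shock) pairs.
    """
    t, state, history = 0.0, 0, []
    while state != 2:
        t += rng.expovariate(lam)  # exponential gap => Poisson arrivals
        if t > horizon:
            break
        # Sample the next state from the row P[state].
        u, acc = rng.random(), 0.0
        for s, p in enumerate(P[state]):
            acc += p
            if u <= acc:
                state = s
                break
        history.append((t, state))
    return history

rng = random.Random(0)  # fixed seed for reproducibility
hist = simulate_component(lam=2.0, horizon=10.0, rng=rng)
```

A condition-based policy would inspect the latest state in `hist` and trigger maintenance once a threshold (e.g. state 1) is reached; the paper's contribution is to learn such decisions with a tailored advantage actor-critic rather than a fixed threshold.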
External IDs: dblp:journals/ress/SunY25