Scheduling conditional task graphs with deep reinforcement learning

Published: 03 Nov 2023, Last Modified: 23 Dec 2023, NLDL 2024
Keywords: task scheduling, deep reinforcement learning
TL;DR: We design and measure a DRL-based scheduler for conditional task graphs in a heterogeneous execution environment.
Abstract: Industrial applications often depend on costly computation infrastructure. Well-optimised schedulers provide cost-efficient utilization of these computational resources, but they can take significant effort to implement. It can also be beneficial to split the application into a hierarchy of tasks represented as a conditional task graph. In such a case, the tasks in the hierarchy are executed conditionally, depending on the output of earlier tasks. While such conditional task graphs can save computational resources, they also add complexity to scheduling. Recently, there has been research on Deep Reinforcement Learning (DRL) based schedulers, but most of it does not address conditional task graphs. We design a DRL-based scheduler for conditional task graphs in a heterogeneous execution environment. We measure how the branch probabilities of a conditional task graph affect the scheduler and how their adverse effects can be mitigated. We show that our solution learns to beat traditional baseline schedulers in a fraction of an hour.
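To make the notion of a conditional task graph concrete, the sketch below shows one possible (purely illustrative) representation: each task lists successor tasks per possible output, so only the branch selected by the actual output is executed, and a trivial greedy policy stands in for the scheduler. The names (Task, run, speedups, "cpu"/"gpu") and the random-outcome simulation are assumptions for illustration, not the paper's method or the code in the linked repository.

```python
# Illustrative sketch (not from the paper): a conditional task graph where each
# task's successors depend on its output, plus a trivial greedy baseline that
# always places work on the fastest executor.
import random
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    cost: float                                   # nominal compute cost of the task
    branches: dict = field(default_factory=dict)  # output value -> list of successor Tasks

def run(task: Task, speedups: dict) -> float:
    """Execute a conditional task graph, following only the branch selected by the
    task's (here randomly simulated) output, and return the total simulated time."""
    executor = max(speedups, key=speedups.get)    # greedy baseline: fastest executor
    elapsed = task.cost / speedups[executor]
    outcome = random.choice(list(task.branches)) if task.branches else None
    for successor in task.branches.get(outcome, []):
        elapsed += run(successor, speedups)
    return elapsed

# Example: a cheap detector task that only triggers a heavy classifier on "positive" outputs.
classify = Task("classify", cost=8.0)
detect = Task("detect", cost=1.0, branches={"positive": [classify], "negative": []})
print(run(detect, speedups={"cpu": 1.0, "gpu": 4.0}))
```

The branch probabilities mentioned in the abstract correspond here to how often the "positive" branch is taken: the larger that probability, the more the expensive downstream work dominates, which is what makes scheduling such graphs harder.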
Git: https://github.com/Aalto-ESG/drl-scheduler-2024
Permission: pdf
Submission Number: 43