Actor-Twin Framework for Task Graph Scheduling

Published: 01 Apr 2025, Last Modified: 21 Apr 2025, ALA, CC BY 4.0
Keywords: Reinforcement Learning, Graph Neural Networks, Task Scheduling
TL;DR: The Actor-Twin framework leverages an actor-critic-based multi-branch GCN architecture with a twin mechanism to improve task scheduling efficiency by reducing makespan in complex and dynamic environments.
Abstract: Task graph scheduling involves efficiently assigning computational tasks to available processors while ensuring the correctness of the result. Because this problem is NP-hard and not polynomial-time approximable, traditional scheduling relies on heuristics. Although these methods can be effective, they often lack efficiency and fail to generalize across different graph sizes and structures. Moreover, they are incompatible with optimization techniques that rely on backpropagation, limiting their adaptability to modern gradient-based approaches. In this paper, we present a novel Actor-Twin framework that integrates Multi-Branch Graph Convolutional Networks (MB-GCNs) with an Actor-Critic approach to overcome the non-differentiable nature of heuristic-based scheduling. The heart of our framework is the Actor-Twin Scheduler (ACTS) module, which generates a task score via the MB-GCN actor that is subsequently used by a heuristic for scheduling. To facilitate gradient-based training of the actor, we incorporate a differentiable twin component that approximates heuristic decisions. We also introduce a systematic graph representation for task-server assignments that is compatible with gradient-based optimization. Experimental results show that Actor-Twin consistently outperforms traditional heuristic scheduling approaches in both the average and the variance of makespan.
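The core idea in the abstract, a non-differentiable scheduling heuristic driven by actor-produced task scores, paired with a differentiable "twin" surrogate for training, can be illustrated with a minimal sketch. This is not the paper's implementation: the list-scheduling rule, the softmax surrogate, the temperature `tau`, and all function names below are illustrative assumptions.

```python
import math

def heuristic_schedule(scores, durations, n_procs):
    # Non-differentiable heuristic (illustrative list scheduling):
    # repeatedly take the highest-scoring unscheduled task and place it
    # on the processor that becomes free earliest. Returns the makespan.
    free_at = [0.0] * n_procs
    finish = [0.0] * len(scores)
    for t in sorted(range(len(scores)), key=lambda i: -scores[i]):
        p = min(range(n_procs), key=lambda j: free_at[j])
        finish[t] = free_at[p] + durations[t]
        free_at[p] = finish[t]
    return max(finish)

def twin_soft_priority(scores, tau=0.5):
    # Differentiable "twin" (assumed form): a temperature-scaled softmax
    # over actor scores stands in for the hard argsort above, so a
    # training signal can flow back to the actor through this surrogate.
    m = max(s / tau for s in scores)
    exps = [math.exp(s / tau - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Example: 4 tasks, 2 processors.
makespan = heuristic_schedule([0.9, 0.1, 0.5, 0.7], [3, 2, 1, 4], 2)
soft = twin_soft_priority([0.9, 0.1, 0.5, 0.7])
```

The twin's softmax sharpens toward the heuristic's hard choice as `tau` shrinks, which is the usual trade-off for such surrogates: lower temperature means closer agreement with the heuristic but noisier gradients.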
Type Of Paper: Work-in-progress paper (max 6 pages)
Anonymous Submission: Anonymized submission.
Submission Number: 17