2024 journal article
GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for DAG Task Scheduling Over Dynamic Vehicular Clouds
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 21(4), 4226–4242.
Vehicular Clouds (VCs) are modern platforms for processing computation-intensive tasks across vehicles. Such tasks are often represented as Directed Acyclic Graphs (DAGs) consisting of interdependent vertices/subtasks and directed edges. However, efficient scheduling of DAG tasks over VCs presents significant challenges, mainly due to the dynamic service provisioning of vehicles within VCs and the non-Euclidean representation of DAG task topologies. In this paper, we propose a Graph neural network-Augmented Deep Reinforcement Learning scheme (GA-DRL) for the timely scheduling of DAG tasks over dynamic VCs. To this end, we first model VC-assisted DAG task scheduling as a Markov decision process. We then adopt a multi-head Graph ATtention network (GAT) to extract the features of DAG subtasks. Our GAT enables a two-way aggregation of the topological information in a DAG task by simultaneously considering the predecessors and successors of each subtask. We further introduce non-uniform DAG neighborhood sampling by codifying the scheduling priorities of different subtasks, which makes our GAT generalizable to completely unseen DAG task topologies. Finally, we integrate the GAT into a double deep Q-network learning module that conducts subtask-to-vehicle assignment according to the extracted subtask features, while accounting for the dynamics and heterogeneity of the vehicles in VCs. Through simulations of various DAG tasks under real-world vehicle movement traces, we demonstrate that GA-DRL outperforms existing benchmarks in terms of DAG task completion time.
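To make the "two-way aggregation" idea concrete, the sketch below shows a minimal single-head attention layer in NumPy that aggregates each subtask's embedding separately over its DAG predecessors and successors, then concatenates the two halves. This is only an illustrative simplification under assumed shapes and names (`two_way_gat_layer`, the `tanh` scoring function, and the toy DAG are all hypothetical, not the paper's actual multi-head GAT or sampling scheme).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def two_way_gat_layer(feats, preds, succs, W, a):
    """Single attention head aggregating each subtask's features from both
    its DAG predecessors and successors (hypothetical simplification of a
    multi-head GAT; shapes and scoring function are assumptions).

    feats : (n, d) subtask feature matrix
    preds, succs : dicts mapping node -> list of neighbor indices
    W : (d, d') shared linear projection
    a : (2*d',) attention vector scoring a (node, neighbor) pair
    """
    h = feats @ W                              # project all subtask features
    n, dp = h.shape
    out = np.zeros((n, 2 * dp))
    for i in range(n):
        halves = []
        for nbrs in (preds[i], succs[i]):      # two directions, aggregated separately
            group = nbrs + [i]                 # include the subtask itself
            scores = np.array([np.tanh(a @ np.concatenate([h[i], h[j]]))
                               for j in group])
            alpha = softmax(scores)            # attention coefficients
            halves.append(sum(w * h[j] for w, j in zip(alpha, group)))
        out[i] = np.concatenate(halves)        # predecessor-side || successor-side
    return out

# Toy chain DAG 0 -> 1 -> 2 with random features (illustrative only)
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
preds = {0: [], 1: [0], 2: [1]}
succs = {0: [1], 1: [2], 2: []}
W = rng.normal(size=(4, 4))
a = rng.normal(size=(8,))
embeds = two_way_gat_layer(feats, preds, succs, W, a)
```

The resulting `embeds` matrix (one row per subtask, predecessor-side and successor-side halves concatenated) is the kind of topology-aware feature a downstream Q-network could consume for subtask-to-vehicle assignment.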