2023 journal article

Multi-Job Intelligent Scheduling With Cross-Device Federated Learning

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 34(2), 535–551.

By: J. Liu, J. Jia, B. Ma, C. Zhou, J. Zhou, Y. Zhou, H. Dai, D. Dou

co-author countries: China 🇨🇳 United States of America 🇺🇸
author keywords: Federated learning; scheduling; multi-job; parallel execution; distributed learning
Source: Web of Science
Added: January 23, 2023

Recent years have witnessed a large amount of decentralized data on various (edge) devices of end users, while aggregating this decentralized data remains complicated for machine learning jobs because of regulations and laws. As a practical approach to handling decentralized data, Federated Learning (FL) enables collaborative global machine learning model training without sharing sensitive raw data. Within the training process of FL, servers schedule devices to jobs; however, device scheduling across multiple jobs in FL remains a critical and open problem. In this article, we propose a novel multi-job FL framework, which enables the training of multiple jobs in parallel. The framework is composed of a system model and a scheduling method. The system model supports a parallel training process of multiple jobs, with a cost model based on data fairness and the training time of diverse devices during the parallel training process. We propose a novel intelligent scheduling approach that combines multiple scheduling methods, including an original reinforcement learning-based scheduling method and an original Bayesian optimization-based scheduling method, each of which incurs a small cost when scheduling devices to multiple jobs. We conduct extensive experiments with diverse jobs and datasets. The experimental results reveal that our proposed approaches significantly outperform baseline approaches in terms of training time (up to 12.73 times faster) and accuracy (up to 46.4% higher).
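
The cost model described in the abstract couples two quantities: how long each round takes (under synchronous aggregation, a job waits for its slowest scheduled device) and how fairly device data is used across rounds. The following is a minimal Python sketch of such a per-round cost, not the authors' implementation; the names round_cost, time, counts, and the fairness weight alpha are hypothetical stand-ins for illustration.

```python
# A minimal sketch (not the paper's implementation) of a per-round cost
# for scheduling devices to multiple FL jobs. Assumptions: each device d
# has an estimated training time time[d][j] for job j, and data fairness
# is approximated by how evenly devices have been selected so far.

from typing import Dict, List

def round_cost(
    assignment: Dict[int, List[int]],   # job id -> device ids chosen this round
    time: Dict[int, Dict[int, float]],  # device id -> job id -> est. training time
    counts: Dict[int, int],             # device id -> times selected so far
    alpha: float = 1.0,                 # hypothetical fairness weight
) -> float:
    # Synchronous FL: a job's round time is bounded by its slowest device.
    total_time = sum(
        max(time[d][j] for d in devices) for j, devices in assignment.items()
    )
    # Fairness penalty: reselecting already-overused devices costs more,
    # nudging the scheduler toward under-used devices (data fairness).
    fairness = sum(counts[d] for devices in assignment.values() for d in devices)
    return total_time + alpha * fairness

if __name__ == "__main__":
    assignment = {0: [1, 2], 1: [3]}
    time = {1: {0: 2.0}, 2: {0: 3.5}, 3: {1: 1.2}}
    counts = {1: 4, 2: 0, 3: 1}
    print(round_cost(assignment, time, counts))  # 3.5 + 1.2 + 1.0*(4+0+1) = 9.7
```

A scheduler such as the paper's reinforcement learning or Bayesian optimization method would then search over candidate assignments to minimize a cost of this general shape.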