2023 journal article

Privacy-Preserving Serverless Edge Learning With Decentralized Small-Scale Mobile Data

IEEE NETWORK, 38(2), 264–271.

By: S. Lin, C. Lin & M. Lee*

author keywords: Training; Next generation networking; Task analysis; Computational efficiency; Computational modeling; Artificial intelligence; Federated learning; 6G mobile communication
Source: Web of Science
Added: May 28, 2024

In next-generation (i.e., 6G) networking systems, the data-driven approach will play an essential role, serving as an efficient tool for network management and enabling popular user applications. Existing frameworks, however, do not account for the complex nature of next-generation networking systems and therefore cannot be applied directly to future communication systems. They also lack dedicated designs for the resource-demanding nature of popular privacy-preserving learning strategies and thus cannot support these strategies efficiently. To fill this gap, this paper extends conventional serverless platforms with a serverless edge learning architecture, providing an efficient distributed training framework that fully exploits the limited wireless communication and edge computation resources of the considered networking system through three features. First, the framework dynamically orchestrates resources among heterogeneous physical units to fulfill privacy-preserving learning objectives efficiently. The design jointly considers learning task requests and the heterogeneity of the underlying infrastructure, including last-mile transmissions, the computation capabilities of edge and cloud computing centers, and infrastructure load. Second, the framework readily works with data-driven approaches to improve network management efficiency, realizing the AI-for-networking promise of next-generation systems and providing efficient network automation. Third, to significantly reduce distributed training overhead, small-scale data training is proposed by integrating a general, simple data classifier; this low-overhead enhancement works seamlessly with various distributed deep models in the framework to improve communication and computation efficiency during the training phase. Based on these innovations, open challenges and future research directions are discussed to encourage the research community to develop efficient privacy-preserving learning techniques.
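
The abstract describes "small-scale data training" in which each edge node uses a simple, general data classifier to shrink its local dataset before joining distributed training. The paper does not publish code here; the following is only a minimal illustrative sketch of that idea, and the specific selection rule (keeping the samples a lightweight logistic-regression classifier is least confident about), the function name select_small_scale_subset, and the keep_ratio parameter are assumptions made for illustration, not the authors' method.

# Hedged sketch: local small-scale data selection at an edge node before
# distributed training. A lightweight classifier scores the local data and
# only a small, presumably informative subset is kept, reducing the
# communication and computation cost of the subsequent training rounds.
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_small_scale_subset(X, y, keep_ratio=0.2, seed=0):
    """Fit a simple local classifier and keep the samples it is least
    confident about (assumed to be the most informative ones)."""
    clf = LogisticRegression(max_iter=200, random_state=seed)
    clf.fit(X, y)
    # Confidence = probability assigned to the predicted class.
    confidence = clf.predict_proba(X).max(axis=1)
    n_keep = max(1, int(keep_ratio * len(X)))
    idx = np.argsort(confidence)[:n_keep]  # least confident first
    return X[idx], y[idx]

if __name__ == "__main__":
    # Synthetic stand-in for one edge node's local mobile data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))
    y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
    X_small, y_small = select_small_scale_subset(X, y, keep_ratio=0.2)
    print(X_small.shape, y_small.shape)  # (200, 16) (200,)

In such a sketch, only the reduced subset would feed the distributed deep model's local updates, which is one plausible way a low-overhead classifier-based filter could cut per-round training cost without altering the shared model architecture.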