2022 article

Federated Learning via Plurality Vote

Yue, K., Jin, R., Wong, C.-W., & Dai, H. (2022, December 7). IEEE Transactions on Neural Networks and Learning Systems.

By: K. Yue, R. Jin, C.-W. Wong & H. Dai

author keywords: Distributed optimization; efficient communication; federated learning; neural network quantization
TL;DR: This work proposes a new scheme named federated learning via plurality vote (FedVote), which reduces quantization error and converges faster than methods that directly quantize the model updates. (via Semantic Scholar)
Source: Web Of Science
Added: January 9, 2023

Federated learning allows collaborative clients to solve a machine-learning problem while preserving data privacy. Recent studies have tackled various challenges in federated learning, but the joint optimization of communication overhead, learning reliability, and deployment efficiency remains an open problem. To address this, we propose a new scheme named federated learning via plurality vote (FedVote). In each communication round of FedVote, clients transmit binary or ternary weights to the server with low communication overhead. The model parameters are aggregated via weighted voting to enhance resilience against Byzantine attacks. When deployed for inference, the model with binary or ternary weights is resource-friendly for edge devices. Our results demonstrate that the proposed method reduces quantization error and converges faster than methods that directly quantize the model updates.
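As a rough sketch of the weighted-voting aggregation described in the abstract, the snippet below tallies quantized (e.g., ternary {-1, 0, +1}) client weight vectors coordinate-wise and keeps the plurality winner. The function name fedvote_aggregate, the uniform default vote weights, and the tie-breaking rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fedvote_aggregate(client_weights, vote_weights=None):
    """Coordinate-wise weighted plurality vote over quantized client weights.

    client_weights: array-like of shape (num_clients, num_params), each entry
        drawn from a small candidate set such as the ternary values {-1, 0, +1}.
    vote_weights: per-client voting weights (e.g., reputation scores); uniform
        weights are assumed here if none are given.
    """
    client_weights = np.asarray(client_weights)
    if vote_weights is None:
        vote_weights = np.ones(client_weights.shape[0])  # uniform votes (assumption)
    candidates = np.unique(client_weights)               # e.g., array([-1, 0, 1])
    # Tally the weighted votes each candidate value receives per coordinate.
    tallies = np.stack([
        ((client_weights == c) * vote_weights[:, None]).sum(axis=0)
        for c in candidates
    ])                                                   # shape: (num_candidates, num_params)
    # Ties break toward the smallest candidate value (np.argmax picks the first).
    return candidates[np.argmax(tallies, axis=0)]

# Example: three clients, three ternary parameters.
clients = [[ 1, -1,  0],
           [ 1,  1,  0],
           [-1,  1,  1]]
print(fedvote_aggregate(clients))  # -> [ 1  1  0]
```

Because each client sends only one or two bits per parameter and the server keeps integer tallies, the per-round communication cost stays low regardless of how the vote weights are chosen.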