Non trust detection of decentralized federated learning based on historical gradient

Published: 01 Jan 2023 · Last Modified: 15 May 2025 · Eng. Appl. Artif. Intell. 2023 · CC BY-SA 4.0
Abstract: As a paradigm of distributed machine learning, federated learning is widely used in real-world scenarios because of its strong privacy protection, which prevents local data from being disclosed. However, traditional federated learning has the defect that a third-party server aggregates the users' models, and the reliability of that third party is difficult to guarantee; moreover, multi-centre phenomena frequently appear in applications such as social networks, banking and finance, and medical health. In a decentralized setting, users cannot be fully reassured either, because malicious and untrustworthy users are mixed among them. Untrustworthy users are benign, yet they may be misclassified as saboteurs because their missing or ambiguous data degrades their performance in decentralized federated learning. In this paper, we propose the Decentralized Federated Learning Historical Gradient (DFedHG) approach to distinguish normal users, untrustworthy users and malicious users in the decentralized federated learning setting. By means of DFedHG, malicious users are further sub-divided into those mounting untargeted attacks and those mounting targeted attacks, which is verified on two types of datasets. The experimental results show that the proposed approach achieves better performance than conventional decentralized federated learning without untrustworthy users, and further exhibits excellent differentiation of malicious users.
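The abstract leaves the scoring mechanism unstated. Purely as an illustration of the general idea, the sketch below shows one plausible way a historical-gradient trust score could work: each peer keeps an exponential moving average of every neighbour's past gradients and labels the neighbour by the cosine similarity between that history and the current consensus direction. The class name, thresholds, momentum value and the targeted/untargeted heuristic are all assumptions for demonstration, not the paper's DFedHG algorithm.

```python
import numpy as np

class HistoricalGradientScorer:
    """Illustrative trust scorer: keeps an exponential average of each
    peer's past gradients and labels peers by how well that history
    agrees with the current consensus gradient."""

    def __init__(self, momentum=0.9, low_trust=0.3, high_trust=0.7):
        self.momentum = momentum      # weight on gradient history (assumed)
        self.low_trust = low_trust    # below this -> malicious (assumed)
        self.high_trust = high_trust  # above this -> normal (assumed)
        self.history = {}             # peer_id -> running gradient average

    @staticmethod
    def _cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom > 0 else 0.0

    def classify(self, peer_id, grad, consensus_grad):
        """Label one peer for this round, then fold its gradient into history."""
        h = self.history.get(peer_id, grad)
        sim = self._cosine(h, consensus_grad)
        self.history[peer_id] = self.momentum * h + (1 - self.momentum) * grad
        if sim >= self.high_trust:
            return "normal"
        if sim >= self.low_trust:
            return "untrustworthy"    # benign, but noisy / incomplete data
        # Heuristic sub-division of malicious peers (assumption): a history
        # pointing consistently opposite to consensus suggests a targeted
        # attack; a merely erratic one suggests an untargeted attack.
        return "targeted attack" if sim <= -self.low_trust else "untargeted attack"

# Example: a peer whose gradient matches the consensus is labelled normal.
scorer = HistoricalGradientScorer()
print(scorer.classify("peer-1", np.ones(4), np.ones(4)))  # -> "normal"
```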