Poisoning Attack on Federated Knowledge Graph Embedding

Published: 23 Jan 2024, Last Modified: 23 May 2024 · TheWebConf24 Oral
Keywords: Knowledge Graph Embedding, Federated Learning, Poisoning Attack
Abstract: Federated Knowledge Graph Embedding (FKGE) is an emerging collaborative learning technique for deriving expressive representations (i.e., embeddings) from client-maintained distributed knowledge graphs (KGs). However, poisoning attacks on FKGE, which lead to biased decisions by downstream applications, remain unexplored. This paper is the first to systematise the risks of FKGE poisoning attacks, from which we develop a novel framework for poisoning attacks that force the victim client to predict specific false facts. The challenge is that FKGE keeps each client's KG local during training, so an attacker cannot inject poisoned data directly into the victim's training data as in centralised KGE. Instead, the attacker must craft poisoned data without access to the victim's local KG and inject it indirectly into the victim's embeddings via FKGE aggregation. Specifically, to create the poisoned data, the attacker first infers the targeted relations in the victim's local KG via a new KG component inference attack. Then, to accurately mislead the victim's embeddings through aggregation, the attacker locally trains a shadow model on the poisoned data and uses an optimised dynamic poisoning scheme to adjust the model and generate progressive poisoned updates. Our experimental results demonstrate the attack's effectiveness, achieving a remarkable success rate on various KGE models (e.g., 100% on TransE with WNRR), while keeping the original task's performance nearly unchanged.
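To make the attack pipeline in the abstract concrete, the sketch below illustrates the attacker-side loop: train a shadow KGE model on the attacker's own triples plus poisoned (false) triples, then send a progressively scaled poisoned update to the FKGE aggregation. This is a minimal illustrative sketch, not the paper's algorithm; the TransE scoring choice, the simple gradient update, the `poison_scale` ramp-up schedule, and all names are assumptions made for illustration.

```python
# Hypothetical sketch of an attacker-side FKGE poisoning loop (assumed details, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
DIM = 32

def transe_score(h, r, t):
    # TransE: plausible triples have a small distance ||h + r - t||.
    return np.linalg.norm(h + r - t)

class ShadowKGE:
    """Attacker's local shadow model trained on its own KG plus poisoned triples."""
    def __init__(self, n_entities, n_relations):
        self.ent = rng.normal(scale=0.1, size=(n_entities, DIM))
        self.rel = rng.normal(scale=0.1, size=(n_relations, DIM))

    def train_step(self, triples, lr=0.01):
        # Simple gradient step pulling h + r toward t for each (h, r, t) triple.
        for h, r, t in triples:
            diff = self.ent[h] + self.rel[r] - self.ent[t]
            self.ent[h] -= lr * diff
            self.rel[r] -= lr * diff
            self.ent[t] += lr * diff

def attacker_round(shadow, benign_triples, poisoned_triples, global_ent, rnd, total_rounds):
    """One federated round: fit the shadow model, then craft a progressively scaled poisoned update."""
    shadow.ent = global_ent.copy()            # start from the aggregated global entity embeddings
    shadow.train_step(benign_triples)         # keep the original task's performance nearly unchanged
    shadow.train_step(poisoned_triples)       # embed the targeted false facts into the shadow model
    poison_scale = (rnd + 1) / total_rounds   # assumed dynamic scheme: ramp the poison up over rounds
    update = global_ent + poison_scale * (shadow.ent - global_ent)
    return update                             # sent to the server for FKGE aggregation

# Toy usage: the false fact (entity 0, relation 0, entity 1) is pushed into the global embeddings.
shadow = ShadowKGE(n_entities=4, n_relations=2)
global_ent = rng.normal(scale=0.1, size=(4, DIM))
benign = [(2, 1, 3)]
poison = [(0, 0, 1)]
for rnd in range(5):
    global_ent = attacker_round(shadow, benign, poison, global_ent, rnd, total_rounds=5)
print(transe_score(global_ent[0], shadow.rel[0], global_ent[1]))  # score shrinks, i.e. the false fact looks plausible
```

In this toy setting the aggregation step is collapsed to the attacker's own update; in a real FKGE deployment the poisoned update would be averaged with the benign clients' updates, which is why the abstract's progressive scheme adjusts the poison strength round by round.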
Track: Semantics and Knowledge
Submission Guidelines Scope: Yes
Submission Guidelines Blind: Yes
Submission Guidelines Format: Yes
Submission Guidelines Limit: Yes
Submission Guidelines Authorship: Yes
Student Author: Yes
Submission Number: 620