deeprobust.graph.global_attack package
Submodules
deeprobust.graph.global_attack.base_attack module
class BaseAttack(model, nnodes, attack_structure=True, attack_features=False, device='cpu')
Abstract base class for global attack classes.
Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
attack(ori_adj, n_perturbations, **kwargs)
Generate attacks on the input graph.
Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
n_perturbations (int) – Number of edge removals/additions.
Return type
None.
check_adj_tensor(adj)
Check that the modified adjacency matrix is symmetric, unweighted, and has an all-zero diagonal.
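The base class only fixes the interface: a subclass implements attack() and stores its result in modified_adj. A minimal sketch of a custom structure attack (the RandomDrop name and its edge-dropping logic are illustrative, not part of the library):
>>> import numpy as np
>>> from deeprobust.graph.global_attack import BaseAttack
>>> class RandomDrop(BaseAttack):
...     """Toy attack: remove n_perturbations random undirected edges."""
...     def attack(self, ori_adj, n_perturbations, **kwargs):
...         modified = ori_adj.tolil()
...         rows, cols = ori_adj.nonzero()
...         # keep one entry per undirected edge so symmetry is preserved
...         upper = [(r, c) for r, c in zip(rows, cols) if r < c]
...         picked = np.random.choice(len(upper), n_perturbations, replace=False)
...         for i in picked:
...             r, c = upper[i]
...             modified[r, c] = modified[c, r] = 0
...         self.modified_adj = modified.tocsr()
>>> # usage mirrors the built-in attacks:
>>> # model = RandomDrop(None, nnodes=adj.shape[0]); model.attack(adj, n_perturbations=10)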
deeprobust.graph.global_attack.dice module
class DICE(model=None, nnodes=None, attack_structure=True, attack_features=False, device='cpu')
As described in Adversarial Attacks on Graph Neural Networks via Meta Learning (ICLR 2019): "DICE (delete internally, connect externally) is a baseline where, for each perturbation, we randomly choose whether to insert or remove an edge. Edges are only removed between nodes from the same classes, and only inserted between nodes from different classes."
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.global_attack import DICE
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> model = DICE()
>>> model.attack(adj, labels, n_perturbations=10)
>>> modified_adj = model.modified_adj
attack(ori_adj, labels, n_perturbations, **kwargs)
Delete internally, connect externally. This baseline has all true class labels (train and test) available.
Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
labels – node labels
n_perturbations (int) – Number of edge removals/additions.
Return type
None.
deeprobust.graph.global_attack.mettack module
Adversarial Attacks on Graph Neural Networks via Meta Learning. ICLR 2019.
Author's TensorFlow implementation:
class BaseMeta(model=None, nnodes=None, feature_shape=None, lambda_=0.5, attack_structure=True, attack_features=False, device='cpu')
Abstract base class for the meta attack. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR 2019, https://openreview.net/pdf?id=Bylnx209YX
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
lambda_ (float) – lambda_ is used to weight the two objectives in Eq. (10) in the paper.
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
attack(adj, labels, n_perturbations)
Generate attacks on the input graph.
Parameters
adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
labels – node labels
n_perturbations (int) – Number of edge removals/additions.
Return type
None.
class MetaApprox(model, nnodes, feature_shape=None, attack_structure=True, attack_features=False, device='cpu', with_bias=False, lambda_=0.5, train_iters=100, lr=0.01)
Approximated version of Meta Attack. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR 2019.
Examples
>>> import numpy as np
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.global_attack import MetaApprox
>>> from deeprobust.graph.utils import preprocess
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> adj, features, labels = preprocess(adj, features, labels, preprocess_adj=False) # convert to tensor
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> idx_unlabeled = np.union1d(idx_val, idx_test)
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> model = MetaApprox(surrogate, nnodes=adj.shape[0], feature_shape=features.shape, attack_structure=True, attack_features=False, device='cpu', lambda_=0).to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, idx_train, idx_unlabeled, n_perturbations=10, ll_constraint=True)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, idx_unlabeled, n_perturbations, ll_constraint=True, ll_cutoff=0.004)
Generate n_perturbations perturbations on the input graph.
Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
idx_unlabeled – unlabeled node indices
n_perturbations (int) – Number of perturbations on the input graph. Perturbations can be edge removals/additions or feature removals/additions.
ll_constraint (bool) – whether to enforce the likelihood-ratio test constraint
ll_cutoff (float) – Critical value for the likelihood-ratio test on the power-law degree distributions (chi-square distribution with one degree of freedom). The default 0.004 corresponds to a p-value of roughly 0.95. Ignored if ll_constraint is False.
class Metattack(model, nnodes, feature_shape=None, attack_structure=True, attack_features=False, device='cpu', with_bias=False, lambda_=0.5, train_iters=100, lr=0.1, momentum=0.9)
Meta attack. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR 2019.
Examples
>>> import numpy as np
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.global_attack import Metattack
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> idx_unlabeled = np.union1d(idx_val, idx_test)
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> model = Metattack(surrogate, nnodes=adj.shape[0], feature_shape=features.shape, attack_structure=True, attack_features=False, device='cpu', lambda_=0).to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, idx_train, idx_unlabeled, n_perturbations=10, ll_constraint=False)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, idx_unlabeled, n_perturbations, ll_constraint=True, ll_cutoff=0.004)
Generate n_perturbations perturbations on the input graph.
Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
idx_unlabeled – unlabeled node indices
n_perturbations (int) – Number of perturbations on the input graph. Perturbations can be edge removals/additions or feature removals/additions.
ll_constraint (bool) – whether to enforce the likelihood-ratio test constraint
ll_cutoff (float) – Critical value for the likelihood-ratio test on the power-law degree distributions (chi-square distribution with one degree of freedom). The default 0.004 corresponds to a p-value of roughly 0.95. Ignored if ll_constraint is False.
deeprobust.graph.global_attack.nipa module
Non-target-specific Node Injection Attacks on Graph Neural Networks: A Hierarchical Reinforcement Learning Approach. WWW 2020. https://faculty.ist.psu.edu/vhonavar/Papers/www20.pdf
Note: this module is still at the testing stage; the paper's performance has not been reproduced yet.
class NIPA(env, features, labels, idx_train, idx_val, idx_test, list_action_space, ratio, reward_type='binary', batch_size=30, num_wrong=0, bilin_q=1, embed_dim=64, gm='mean_field', mlp_hidden=64, max_lv=1, save_dir='checkpoint_dqn', device=None)
Reinforcement learning agent for the NIPA attack. https://faculty.ist.psu.edu/vhonavar/Papers/www20.pdf
Parameters
env – node attack environment
features – node feature matrix
labels – labels
idx_meta – node meta indices
idx_test – node test indices
list_action_space (list) – list of action spaces
num_mod – number of modifications (perturbations) on the graph
reward_type (str) – type of reward (e.g., ‘binary’)
batch_size – batch size for training the DQN
save_dir – directory for saving model checkpoints
device (str) – ‘cpu’ or ‘cuda’
Examples
See more details in https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_nipa.py
deeprobust.graph.global_attack.random_attack module
class Random(model=None, nnodes=None, attack_structure=True, attack_features=False, device='cpu')
Randomly adds edges to the input graph.
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.global_attack import Random
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> model = Random()
>>> model.attack(adj, n_perturbations=10)
>>> modified_adj = model.modified_adj
attack(ori_adj, n_perturbations, type='add', **kwargs)
Generate attacks on the input graph.
Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
n_perturbations (int) – Number of edge removals/additions.
type (str) – perturbation type. Could be ‘add’, ‘remove’ or ‘flip’.
Return type
None.
inject_nodes(adj, n_add, n_perturbations)
For each added node, randomly connect it with other nodes.
perturb_adj(adj, n_perturbations, type='add')
Randomly add, remove, or flip edges.
Parameters
adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
n_perturbations (int) – Number of edge removals/additions.
type (str) – perturbation type. Could be ‘add’, ‘remove’ or ‘flip’.
Returns
perturbed adjacency matrix
Return type
scipy.sparse matrix
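Unlike attack(), perturb_adj() returns the perturbed matrix directly. A minimal sketch reusing the Cora adjacency from the example above (assuming, as check_adj_tensor suggests, that perturbations keep the matrix symmetric):
>>> model = Random()
>>> flipped_adj = model.perturb_adj(adj, n_perturbations=10, type='flip')
>>> # each undirected flip toggles at most two symmetric entries
>>> (flipped_adj != adj).nnz <= 2 * 10
True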
deeprobust.graph.global_attack.topology_attack module
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective.
TensorFlow implementation:
class MinMax(model=None, nnodes=None, loss_type='CE', feature_shape=None, attack_structure=True, attack_features=False, device='cpu')
MinMax attack for graph data.
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
loss_type (str) – attack loss type, chosen from [‘CE’, ‘CW’]
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.global_attack import MinMax
>>> from deeprobust.graph.utils import preprocess
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> adj, features, labels = preprocess(adj, features, labels, preprocess_adj=False) # convert to tensor
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Victim Model
>>> victim_model = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0.5, weight_decay=5e-4, device='cpu').to('cpu')
>>> victim_model.fit(features, adj, labels, idx_train)
>>> # Setup Attack Model
>>> model = MinMax(model=victim_model, nnodes=adj.shape[0], loss_type='CE', device='cpu').to('cpu')
>>> model.attack(features, adj, labels, idx_train, n_perturbations=10)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, n_perturbations, **kwargs)
Generate perturbations on the input graph.
Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
n_perturbations (int) – Number of perturbations on the input graph. Perturbations can be edge removals/additions or feature removals/additions.
epochs – number of training epochs
class PGDAttack(model=None, nnodes=None, loss_type='CE', feature_shape=None, attack_structure=True, attack_features=False, device='cpu')
PGD attack for graph data.
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
loss_type (str) – attack loss type, chosen from [‘CE’, ‘CW’]
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.global_attack import PGDAttack
>>> from deeprobust.graph.utils import preprocess
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> adj, features, labels = preprocess(adj, features, labels, preprocess_adj=False) # convert to tensor
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Victim Model
>>> victim_model = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0.5, weight_decay=5e-4, device='cpu').to('cpu')
>>> victim_model.fit(features, adj, labels, idx_train)
>>> # Setup Attack Model
>>> model = PGDAttack(model=victim_model, nnodes=adj.shape[0], loss_type='CE', device='cpu').to('cpu')
>>> model.attack(features, adj, labels, idx_train, n_perturbations=10)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, n_perturbations, epochs=200, **kwargs)
Generate perturbations on the input graph.
Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
n_perturbations (int) – Number of perturbations on the input graph. Perturbations can be edge removals/additions or feature removals/additions.
epochs – number of training epochs
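For reference, loss_type selects between standard cross-entropy (‘CE’) and a Carlini-Wagner-style margin loss (‘CW’) on the victim model's logits. A hedged sketch of the CW margin (illustrative only, not necessarily the library's exact implementation):
>>> import torch
>>> def cw_margin(logits, labels, kappa=0.0):
...     # margin between the true-class logit and the best competing logit;
...     # the attacker drives this margin down
...     onehot = torch.nn.functional.one_hot(labels, logits.size(1)).bool()
...     true_logit = logits[onehot]
...     best_other = logits.masked_fill(onehot, float('-inf')).max(dim=1).values
...     return torch.clamp(true_logit - best_other, min=-kappa).mean()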
Module contents
class BaseAttack(model, nnodes, attack_structure=True, attack_features=False, device='cpu')
Abstract base class for global attack classes.
Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
attack(ori_adj, n_perturbations, **kwargs)
Generate attacks on the input graph.
Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
n_perturbations (int) – Number of edge removals/additions.
Return type
None.
check_adj_tensor(adj)
Check that the modified adjacency matrix is symmetric, unweighted, and has an all-zero diagonal.
class DICE(model=None, nnodes=None, attack_structure=True, attack_features=False, device='cpu')
As described in Adversarial Attacks on Graph Neural Networks via Meta Learning (ICLR 2019): "DICE (delete internally, connect externally) is a baseline where, for each perturbation, we randomly choose whether to insert or remove an edge. Edges are only removed between nodes from the same classes, and only inserted between nodes from different classes."
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.global_attack import DICE
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> model = DICE()
>>> model.attack(adj, labels, n_perturbations=10)
>>> modified_adj = model.modified_adj
attack(ori_adj, labels, n_perturbations, **kwargs)
Delete internally, connect externally. This baseline has all true class labels (train and test) available.
Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
labels – node labels
n_perturbations (int) – Number of edge removals/additions.
Return type
None.
class MetaApprox(model, nnodes, feature_shape=None, attack_structure=True, attack_features=False, device='cpu', with_bias=False, lambda_=0.5, train_iters=100, lr=0.01)
Approximated version of Meta Attack. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR 2019.
Examples
>>> import numpy as np
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.global_attack import MetaApprox
>>> from deeprobust.graph.utils import preprocess
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> adj, features, labels = preprocess(adj, features, labels, preprocess_adj=False) # convert to tensor
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> idx_unlabeled = np.union1d(idx_val, idx_test)
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> model = MetaApprox(surrogate, nnodes=adj.shape[0], feature_shape=features.shape, attack_structure=True, attack_features=False, device='cpu', lambda_=0).to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, idx_train, idx_unlabeled, n_perturbations=10, ll_constraint=True)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, idx_unlabeled, n_perturbations, ll_constraint=True, ll_cutoff=0.004)
Generate n_perturbations perturbations on the input graph.
Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
idx_unlabeled – unlabeled node indices
n_perturbations (int) – Number of perturbations on the input graph. Perturbations can be edge removals/additions or feature removals/additions.
ll_constraint (bool) – whether to enforce the likelihood-ratio test constraint
ll_cutoff (float) – Critical value for the likelihood-ratio test on the power-law degree distributions (chi-square distribution with one degree of freedom). The default 0.004 corresponds to a p-value of roughly 0.95. Ignored if ll_constraint is False.
class Metattack(model, nnodes, feature_shape=None, attack_structure=True, attack_features=False, device='cpu', with_bias=False, lambda_=0.5, train_iters=100, lr=0.1, momentum=0.9)
Meta attack. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR 2019.
Examples
>>> import numpy as np
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.global_attack import Metattack
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> idx_unlabeled = np.union1d(idx_val, idx_test)
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> model = Metattack(surrogate, nnodes=adj.shape[0], feature_shape=features.shape, attack_structure=True, attack_features=False, device='cpu', lambda_=0).to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, idx_train, idx_unlabeled, n_perturbations=10, ll_constraint=False)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, idx_unlabeled, n_perturbations, ll_constraint=True, ll_cutoff=0.004)
Generate n_perturbations perturbations on the input graph.
Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
idx_unlabeled – unlabeled node indices
n_perturbations (int) – Number of perturbations on the input graph. Perturbations can be edge removals/additions or feature removals/additions.
ll_constraint (bool) – whether to enforce the likelihood-ratio test constraint
ll_cutoff (float) – Critical value for the likelihood-ratio test on the power-law degree distributions (chi-square distribution with one degree of freedom). The default 0.004 corresponds to a p-value of roughly 0.95. Ignored if ll_constraint is False.
class Random(model=None, nnodes=None, attack_structure=True, attack_features=False, device='cpu')
Randomly adds edges to the input graph.
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.global_attack import Random
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> model = Random()
>>> model.attack(adj, n_perturbations=10)
>>> modified_adj = model.modified_adj
attack(ori_adj, n_perturbations, type='add', **kwargs)
Generate attacks on the input graph.
Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
n_perturbations (int) – Number of edge removals/additions.
type (str) – perturbation type. Could be ‘add’, ‘remove’ or ‘flip’.
Return type
None.
inject_nodes(adj, n_add, n_perturbations)
For each added node, randomly connect it with other nodes.
perturb_adj(adj, n_perturbations, type='add')
Randomly add, remove, or flip edges.
Parameters
adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
n_perturbations (int) – Number of edge removals/additions.
type (str) – perturbation type. Could be ‘add’, ‘remove’ or ‘flip’.
Returns
perturbed adjacency matrix
Return type
scipy.sparse matrix
class MinMax(model=None, nnodes=None, loss_type='CE', feature_shape=None, attack_structure=True, attack_features=False, device='cpu')
MinMax attack for graph data.
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
loss_type (str) – attack loss type, chosen from [‘CE’, ‘CW’]
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.global_attack import MinMax
>>> from deeprobust.graph.utils import preprocess
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> adj, features, labels = preprocess(adj, features, labels, preprocess_adj=False) # convert to tensor
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Victim Model
>>> victim_model = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0.5, weight_decay=5e-4, device='cpu').to('cpu')
>>> victim_model.fit(features, adj, labels, idx_train)
>>> # Setup Attack Model
>>> model = MinMax(model=victim_model, nnodes=adj.shape[0], loss_type='CE', device='cpu').to('cpu')
>>> model.attack(features, adj, labels, idx_train, n_perturbations=10)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, n_perturbations, **kwargs)
Generate perturbations on the input graph.
Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
n_perturbations (int) – Number of perturbations on the input graph. Perturbations can be edge removals/additions or feature removals/additions.
epochs – number of training epochs
class PGDAttack(model=None, nnodes=None, loss_type='CE', feature_shape=None, attack_structure=True, attack_features=False, device='cpu')
PGD attack for graph data.
Parameters
model – model to attack. Default None.
nnodes (int) – number of nodes in the input graph
loss_type (str) – attack loss type, chosen from [‘CE’, ‘CW’]
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.global_attack import PGDAttack
>>> from deeprobust.graph.utils import preprocess
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> adj, features, labels = preprocess(adj, features, labels, preprocess_adj=False) # convert to tensor
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Victim Model
>>> victim_model = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0.5, weight_decay=5e-4, device='cpu').to('cpu')
>>> victim_model.fit(features, adj, labels, idx_train)
>>> # Setup Attack Model
>>> model = PGDAttack(model=victim_model, nnodes=adj.shape[0], loss_type='CE', device='cpu').to('cpu')
>>> model.attack(features, adj, labels, idx_train, n_perturbations=10)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, n_perturbations, epochs=200, **kwargs)
Generate perturbations on the input graph.
Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
n_perturbations (int) – Number of perturbations on the input graph. Perturbations can be edge removals/additions or feature removals/additions.
epochs – number of training epochs
class NIPA(env, features, labels, idx_train, idx_val, idx_test, list_action_space, ratio, reward_type='binary', batch_size=30, num_wrong=0, bilin_q=1, embed_dim=64, gm='mean_field', mlp_hidden=64, max_lv=1, save_dir='checkpoint_dqn', device=None)
Reinforcement learning agent for the NIPA attack. https://faculty.ist.psu.edu/vhonavar/Papers/www20.pdf
Parameters
env – node attack environment
features – node feature matrix
labels – labels
idx_meta – node meta indices
idx_test – node test indices
list_action_space (list) – list of action spaces
num_mod – number of modifications (perturbations) on the graph
reward_type (str) – type of reward (e.g., ‘binary’)
batch_size – batch size for training the DQN
save_dir – directory for saving model checkpoints
device (str) – ‘cpu’ or ‘cuda’
Examples
See more details in https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_nipa.py
class NodeEmbeddingAttack()
Node embedding attack. Adversarial Attacks on Node Embeddings via Graph Poisoning. Aleksandar Bojchevski and Stephan Günnemann, ICML 2019. http://proceedings.mlr.press/v97/bojchevski19a.html
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.global_attack import NodeEmbeddingAttack
>>> data = Dataset(root='/tmp/', name='cora_ml', seed=15)
>>> adj, features, labels = data.adj, data.features, data.labels
>>> model = NodeEmbeddingAttack()
>>> model.attack(adj, attack_type="remove")
>>> modified_adj = model.modified_adj
>>> model.attack(adj, attack_type="remove", min_span_tree=True)
>>> modified_adj = model.modified_adj
>>> model.attack(adj, attack_type="add", n_candidates=10000)
>>> modified_adj = model.modified_adj
>>> model.attack(adj, attack_type="add_by_remove", n_candidates=10000)
>>> modified_adj = model.modified_adj
attack(adj, n_perturbations=1000, dim=32, window_size=5, attack_type='remove', min_span_tree=False, n_candidates=None, seed=None, **kwargs)
Selects the top n_perturbations flips using the perturbation attack.
Parameters
adj (sp.spmatrix) – The graph represented as a sparse scipy matrix
n_perturbations (int) – Number of flips to select
dim (int) – Dimensionality of the embeddings
window_size (int) – Co-occurrence window size
attack_type (str) – chosen from [“remove”, “add”, “add_by_remove”]
min_span_tree (bool) – Whether to disallow edges that lie on the minimum spanning tree; only valid when attack_type is “remove”
n_candidates (int) – Number of candidates for addition; only valid when attack_type is “add” or “add_by_remove”
seed (int) – Random seed
flip_candidates(adj, candidates)
Flip the edges in the candidate set to non-edges and vice versa.
Parameters
adj (sp.csr_matrix, shape [n_nodes, n_nodes]) – Adjacency matrix of the graph
candidates (np.ndarray, shape [?, 2]) – Candidate set of edge flips
Returns
Adjacency matrix of the graph with the flipped edges/non-edges.
Return type
sp.csr_matrix, shape [n_nodes, n_nodes]
generate_candidates_addition(adj, n_candidates, seed=None)
Generates candidate edge flips for addition (non-edge -> edge).
Parameters
adj (sp.csr_matrix, shape [n_nodes, n_nodes]) – Adjacency matrix of the graph
n_candidates (int) – Number of candidates to generate
seed (int) – Random seed
Returns
Candidate set of edge flips
Return type
np.ndarray, shape [?, 2]
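A minimal sketch chaining the two helpers above by hand rather than going through attack(), reusing the cora_ml adjacency from the class example:
>>> model = NodeEmbeddingAttack()
>>> candidates = model.generate_candidates_addition(adj, n_candidates=1000, seed=0)
>>> modified_adj = model.flip_candidates(adj, candidates)
>>> modified_adj.nnz > adj.nnz  # all candidates are non-edges, so flipping adds edges
True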
generate_candidates_removal(adj, seed=None)
Generates candidate edge flips for removal (edge -> non-edge), disallowing one random edge per node to prevent singleton nodes.
Parameters
adj (sp.csr_matrix, shape [n_nodes, n_nodes]) – Adjacency matrix of the graph
seed (int) – Random seed
Returns
Candidate set of edge flips
Return type
np.ndarray, shape [?, 2]
generate_candidates_removal_minimum_spanning_tree(adj)
Generates candidate edge flips for removal (edge -> non-edge), disallowing edges that lie on the minimum spanning tree.
Parameters
adj (sp.csr_matrix, shape [n_nodes, n_nodes]) – Adjacency matrix of the graph
Returns
Candidate set of edge flips
Return type
np.ndarray, shape [?, 2]
class OtherNodeEmbeddingAttack(type)
Baseline methods from the paper Adversarial Attacks on Node Embeddings via Graph Poisoning. Aleksandar Bojchevski and Stephan Günnemann, ICML 2019. http://proceedings.mlr.press/v97/bojchevski19a.html
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.global_attack import OtherNodeEmbeddingAttack
>>> data = Dataset(root='/tmp/', name='cora_ml', seed=15)
>>> adj, features, labels = data.adj, data.features, data.labels
>>> model = OtherNodeEmbeddingAttack(type='degree')
>>> model.attack(adj, attack_type="remove")
>>> modified_adj = model.modified_adj
>>> #
>>> model = OtherNodeEmbeddingAttack(type='eigencentrality')
>>> model.attack(adj, attack_type="remove")
>>> modified_adj = model.modified_adj
>>> #
>>> model = OtherNodeEmbeddingAttack(type='random')
>>> model.attack(adj, attack_type="add", n_candidates=10000)
>>> modified_adj = model.modified_adj
attack(adj, n_perturbations=1000, attack_type='remove', min_span_tree=False, n_candidates=None, seed=None, **kwargs)
Selects the top n_perturbations flips using the chosen baseline method.
Parameters
adj (sp.spmatrix) – The graph represented as a sparse scipy matrix
n_perturbations (int) – Number of flips to select
attack_type (str) – chosen from [“remove”, “add”]
min_span_tree (bool) – Whether to disallow edges that lie on the minimum spanning tree; only valid when attack_type is “remove”
n_candidates (int) – Number of candidates for addition; only valid when attack_type is “add”
seed (int) – Random seed
Returns
The top edge flips from the candidate set
Return type
np.ndarray, shape [?, 2]
degree_top_flips(adj, candidates, n_perturbations, complement)
Selects the top n_perturbations flips using the degree centrality score of the edges.
Parameters
adj (sp.spmatrix) – The graph represented as a sparse scipy matrix
candidates (np.ndarray, shape [?, 2]) – Candidate set of edge flips
n_perturbations (int) – Number of flips to select
complement (bool) – Whether to look at the complement graph
Returns
The top edge flips from the candidate set
Return type
np.ndarray, shape [?, 2]
eigencentrality_top_flips(adj, candidates, n_perturbations)
Selects the top n_perturbations flips using the eigencentrality score of the edges. Applicable only when removing edges.
Parameters
adj (sp.spmatrix) – The graph represented as a sparse scipy matrix
candidates (np.ndarray, shape [?, 2]) – Candidate set of edge flips
n_perturbations (int) – Number of flips to select
Returns
The top edge flips from the candidate set
Return type
np.ndarray, shape [?, 2]
random_top_flips(candidates, n_perturbations, seed=None)
Selects n_perturbations flips at random.
Parameters
candidates (np.ndarray, shape [?, 2]) – Candidate set of edge flips
n_perturbations (int) – Number of flips to select
seed (int) – Random seed
Returns
The randomly selected edge flips from the candidate set
Return type
np.ndarray, shape [?, 2]