deeprobust.graph.targeted_attack package¶
Submodules¶
deeprobust.graph.targeted_attack.base_attack module¶
class BaseAttack(model, nnodes, attack_structure=True, attack_features=False, device='cpu')[source]¶
Abstract base class for targeted attack classes.
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
attack(ori_adj, n_perturbations, **kwargs)[source]¶
Generate perturbations on the input graph.
- Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
- Returns
None.
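The sketch below is not part of DeepRobust; it is a minimal illustration, under the assumption that subclasses follow the convention of storing their result in self.modified_adj, of how a custom targeted attack can be built on top of BaseAttack. The class name RandomEdgeFlip and its attack logic are hypothetical.

import numpy as np

from deeprobust.graph.targeted_attack import BaseAttack


class RandomEdgeFlip(BaseAttack):
    """Toy attack (hypothetical, for illustration only): flip random edges
    incident to the target node."""

    def __init__(self, model=None, nnodes=None, device='cpu'):
        super().__init__(model, nnodes, attack_structure=True,
                         attack_features=False, device=device)

    def attack(self, ori_adj, target_node, n_perturbations, **kwargs):
        modified_adj = ori_adj.tolil()
        # pick random endpoints and flip the corresponding edges
        candidates = np.random.choice(ori_adj.shape[0], n_perturbations, replace=False)
        for u in candidates:
            new_val = 1 - modified_adj[target_node, u]
            modified_adj[target_node, u] = new_val
            modified_adj[u, target_node] = new_val
        # subclasses conventionally expose the perturbed graph here
        self.modified_adj = modified_adj.tocsr()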
deeprobust.graph.targeted_attack.fga module¶
FGA: Fast Gradient Attack on Network Embedding (https://arxiv.org/pdf/1809.02797.pdf). Another very similar algorithm worth mentioning here is FGSM (applied to graph data); it is discussed in Zügner’s paper, Adversarial Attacks on Neural Networks for Graph Data, KDD’18.
class FGA(model, nnodes, feature_shape=None, attack_structure=True, attack_features=False, device='cpu')[source]¶
FGA/FGSM.
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.targeted_attack import FGA
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> target_node = 0
>>> model = FGA(surrogate, nnodes=adj.shape[0], attack_structure=True, attack_features=False, device='cpu').to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, idx_train, target_node, n_perturbations=5)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, target_node, n_perturbations, verbose=False, **kwargs)[source]¶
Generate perturbations on the input graph. An evaluation sketch follows the parameter list below.
- Parameters
ori_features (scipy.sparse.csr_matrix) – Original (unperturbed) node feature matrix
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – training node indices
target_node (int) – target node index to be attacked
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
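After attack() has produced modified_adj, it is often useful to check whether the target node’s prediction actually changed. The helper below is a hedged sketch, not part of the FGA API; it assumes the surrogate is a deeprobust.graph.defense.GCN whose predict(features, adj) method returns log-probabilities, and that labels is a NumPy array.

# Hedged evaluation sketch (not part of FGA): re-run the surrogate on the
# perturbed graph and see whether the target node is now misclassified.
def attack_succeeded(surrogate, features, modified_adj, labels, target_node):
    output = surrogate.predict(features, modified_adj)   # assumed: log-probabilities, shape [N, C]
    predicted = output.argmax(dim=1)[target_node].item()
    # the attack succeeds if the prediction no longer matches the true label
    return predicted != labels[target_node]

# Usage, continuing the Examples block above:
# print(attack_succeeded(surrogate, features, model.modified_adj, labels, target_node))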
deeprobust.graph.targeted_attack.ig_attack module¶
- Adversarial Examples on Graph Data: Deep Insights into Attack and Defense
class IGAttack(model, nnodes=None, feature_shape=None, attack_structure=True, attack_features=True, device='cpu')[source]¶
IGAttack: IG-FGSM. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, https://arxiv.org/pdf/1903.01610.pdf.
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.targeted_attack import IGAttack
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> target_node = 0
>>> model = IGAttack(surrogate, nnodes=adj.shape[0], attack_structure=True, attack_features=True, device='cpu').to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, idx_train, target_node, n_perturbations=5, steps=10)
>>> modified_adj = model.modified_adj
>>> modified_features = model.modified_features
attack(ori_features, ori_adj, labels, idx_train, target_node, n_perturbations, steps=10, **kwargs)[source]¶
Generate perturbations on the input graph.
- Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – training node indices
target_node (int) – target node index to be attacked
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
steps (int) – steps for computing integrated gradients
calc_importance_edge(features, adj_norm, labels, steps)[source]¶
Calculate integrated gradients for edges. Ideally the gradient would be taken with respect to adj rather than adj_norm, but that computation is too time-consuming, so the gradient of the loss is computed with respect to adj_norm instead.
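To make the integrated-gradients step concrete, here is a hedged sketch of the idea, not IGAttack’s exact implementation. Assumptions: the surrogate is callable as model(features, adj) and returns log-probabilities, labels is a torch.LongTensor, and an all-zero matrix is used as the baseline; the helper name is hypothetical.

import torch
import torch.nn.functional as F

def integrated_gradients_adj(model, features, adj_norm, labels, target_node, steps=10):
    baseline = torch.zeros_like(adj_norm)
    scaled_grads = []
    for k in range(1, steps + 1):
        # point on the straight-line path between the baseline and the real (normalized) adjacency
        interp = (baseline + float(k) / steps * (adj_norm - baseline)).requires_grad_(True)
        output = model(features, interp)                       # assumed: log-probabilities
        loss = F.nll_loss(output[[target_node]], labels[[target_node]])
        grad = torch.autograd.grad(loss, interp)[0]
        scaled_grads.append(grad)
    avg_grad = torch.stack(scaled_grads).mean(dim=0)
    # element-wise attribution: (input - baseline) * average gradient along the path
    return (adj_norm - baseline) * avg_grad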
deeprobust.graph.targeted_attack.nettack module¶
- Adversarial Attacks on Neural Networks for Graph Data. KDD 2018.
- Author’s Implementation
Since PyTorch does not yet provide good enough support for operations on sparse tensors, this part of the code is heavily based on the author’s implementation.
class Nettack(model, nnodes=None, attack_structure=True, attack_features=False, device='cpu')[source]¶
Nettack.
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.targeted_attack import Nettack
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> target_node = 0
>>> model = Nettack(surrogate, nnodes=adj.shape[0], attack_structure=True, attack_features=True, device='cpu').to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, target_node, n_perturbations=5)
>>> modified_adj = model.modified_adj
>>> modified_features = model.modified_features
attack(features, adj, labels, target_node, n_perturbations, direct=True, n_influencers=0, ll_cutoff=0.004, verbose=True, **kwargs)[source]¶
Generate perturbations on the input graph.
- Parameters
features (torch.Tensor or scipy.sparse.csr_matrix) – Original (unperturbed) node feature matrix. Note that a torch.Tensor will be automatically transformed into scipy.sparse.csr_matrix
adj (torch.Tensor or scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix. Note that a torch.Tensor will be automatically transformed into scipy.sparse.csr_matrix
labels – node labels
target_node (int) – target node index to be attacked
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
direct (bool) – whether to conduct direct attack
n_influencers – number of influencer nodes used when performing an indirect attack (i.e., when direct is set to False). Ignored when direct is True; see the usage sketch after this parameter list.
ll_cutoff (float) – The critical value for the likelihood ratio test of the power-law distributions, based on the chi-square distribution with one degree of freedom. The default value 0.004 corresponds to a p-value of roughly 0.95.
verbose (bool) – whether to show verbose logs
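A hedged usage sketch for the indirect variant, reusing the surrogate and data from the Examples block above; the only change from the direct attack is passing direct=False together with n_influencers.

>>> model = Nettack(surrogate, nnodes=adj.shape[0], attack_structure=True, attack_features=False, device='cpu').to('cpu')
>>> # Indirect (influencer) attack: perturb edges of 5 influencer nodes instead of the target's own edges
>>> model.attack(features, adj, labels, target_node, n_perturbations=5, direct=False, n_influencers=5)
>>> modified_adj = model.modified_adj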
compute_cooccurrence_constraint(nodes)[source]¶
Co-occurrence constraint as described in the paper.
- Parameters
nodes (np.array) – Nodes whose features are considered for change
- Returns
Binary matrix of dimension len(nodes) x D. A 1 in entry n,d indicates that we are allowed to add feature d to the features of node n.
- Return type
np.array [len(nodes), D], dtype bool
compute_new_a_hat_uv(potential_edges, target_node)[source]¶
Compute the updated A_hat_square_uv entries that would result from inserting/deleting the input edges, for every edge.
- Parameters
potential_edges (np.array, shape [P,2], dtype int) – The edges to check.
- Returns
The updated A_hat_square_uv entries, a sparse [P, N] matrix, where P is len(potential_edges).
- Return type
sp.sparse_matrix
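For orientation, a hedged note on the notation (following the Nettack paper rather than this module’s exact variable names): A_hat denotes the GCN-normalized adjacency matrix of the linearized two-layer surrogate, whose output is

\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}, \qquad \tilde{A} = A + I_N, \qquad Z = \operatorname{softmax}\!\left( \hat{A}^{2} X W \right)

where \tilde{D} is the degree matrix of \tilde{A}. The entries computed here are the values that the target node’s row of \hat{A}^{2} would take after each candidate edge flip.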
filter_potential_singletons(modified_adj)[source]¶
Computes a mask for entries potentially leading to singleton nodes, i.e., where one of the two nodes corresponding to the entry has degree 1 and there is an edge between the two nodes.
get_attacker_nodes(n=5, add_additional_nodes=False)[source]¶
Determine the influencer nodes to attack node i based on the weights W and the attributes X.
struct_score(a_hat_uv, XW)[source]¶
Compute structure scores, cf. Eq. 15 in the paper.
- Parameters
a_hat_uv (sp.sparse_matrix, shape [P,2]) – Entries of matrix A_hat^2_u for each potential edge (see paper for explanation)
XW (sp.sparse_matrix, shape [N, K], dtype float) – The class logits for each node.
- Returns
The struct score for every row in a_hat_uv
- Return type
np.array [P,]
compute_new_a_hat_uv[source]¶
Compute the new values [A_hat_square]_u for every potential edge, where u is the target node. Cf. Theorem 5.1, Equation 17 in the paper.
deeprobust.graph.targeted_attack.rl_s2v module¶
- Adversarial Attack on Graph Structured Data (RL-S2V). ICML 2018.
- Author’s Implementation
This part of the code is adapted from the author’s implementation (Copyright (c) 2018 Dai, Hanjun and Li, Hui and Tian, Tian and Huang, Xin and Wang, Lin and Zhu, Jun and Song, Le) but modified to integrate with this repository.
class RLS2V(env, features, labels, idx_meta, idx_test, list_action_space, num_mod, reward_type, batch_size=10, num_wrong=0, bilin_q=1, embed_dim=64, gm='mean_field', mlp_hidden=64, max_lv=1, save_dir='checkpoint_dqn', device=None)[source]¶
Reinforcement learning agent for RL-S2V attack.
- Parameters
env – Node attack environment
features – node features matrix
labels – node labels
idx_meta – node meta indices
idx_test – node test indices
list_action_space (list) – list of action space
num_mod – number of modifications (perturbations) on the graph
reward_type (str) – type of reward (e.g., ‘binary’)
batch_size – batch size for training DQN
save_dir – saving directory for model checkpoints
device (str) – ‘cpu’ or ‘cuda’
Examples
See details in https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_rl_s2v.py
deeprobust.graph.targeted_attack.rnd module¶
class RND(model=None, nnodes=None, attack_structure=True, attack_features=False, device='cpu')[source]¶
As described in Adversarial Attacks on Neural Networks for Graph Data (KDD’18): "Rnd is an attack in which we modify the structure of the graph. Given our target node v, in each step we randomly sample nodes u whose label is different from v and add the edge u,v to the graph structure."
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.targeted_attack import RND
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Attack Model
>>> target_node = 0
>>> model = RND()
>>> # Attack
>>> model.attack(adj, labels, idx_train, target_node, n_perturbations=5)
>>> modified_adj = model.modified_adj
>>> # # You can also inject nodes
>>> # model.add_nodes(features, adj, labels, idx_train, target_node, n_added=10, n_perturbations=100)
>>> # modified_adj = model.modified_adj
add_nodes(features, ori_adj, labels, idx_train, target_node, n_added=1, n_perturbations=10, **kwargs)[source]¶
For each added node, first connect the target node to the added fake node, then randomly connect the fake node to other nodes whose label differs from the target node’s. For the node features, simply copy those of an arbitrary node.
attack(ori_adj, labels, idx_train, target_node, n_perturbations, **kwargs)[source]¶
Randomly sample nodes u whose label is different from v and add the edge (u, v) to the graph structure. This baseline only has access to the true class labels of the training set. A minimal sketch of this strategy follows the parameter list below.
- Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
target_node (int) – target node index to be attacked
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
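The following is a minimal sketch of the RND strategy described above, written under assumptions rather than copied from DeepRobust’s implementation: the function name rnd_attack is hypothetical, labels is assumed to be a NumPy array, and ties are broken purely at random.

import numpy as np
import scipy.sparse as sp

def rnd_attack(ori_adj, labels, idx_train, target_node, n_perturbations, seed=0):
    rng = np.random.default_rng(seed)
    modified_adj = sp.lil_matrix(ori_adj, copy=True)
    # candidate nodes: training nodes with a different label and no existing edge to the target
    diff_label = [u for u in idx_train
                  if labels[u] != labels[target_node] and modified_adj[target_node, u] == 0]
    chosen = rng.choice(diff_label, size=min(n_perturbations, len(diff_label)), replace=False)
    for u in chosen:
        # add the undirected edge (target_node, u)
        modified_adj[target_node, u] = 1
        modified_adj[u, target_node] = 1
    return modified_adj.tocsr()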
Module contents¶
class BaseAttack(model, nnodes, attack_structure=True, attack_features=False, device='cpu')[source]¶
Abstract base class for targeted attack classes.
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
attack(ori_adj, n_perturbations, **kwargs)[source]¶
Generate perturbations on the input graph.
- Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix.
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
- Returns
None.
class FGA(model, nnodes, feature_shape=None, attack_structure=True, attack_features=False, device='cpu')[source]¶
FGA/FGSM.
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.targeted_attack import FGA
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> target_node = 0
>>> model = FGA(surrogate, nnodes=adj.shape[0], attack_structure=True, attack_features=False, device='cpu').to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, idx_train, target_node, n_perturbations=5)
>>> modified_adj = model.modified_adj
attack(ori_features, ori_adj, labels, idx_train, target_node, n_perturbations, verbose=False, **kwargs)[source]¶
Generate perturbations on the input graph.
- Parameters
ori_features (scipy.sparse.csr_matrix) – Original (unperturbed) node feature matrix
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – training node indices
target_node (int) – target node index to be attacked
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
class RND(model=None, nnodes=None, attack_structure=True, attack_features=False, device='cpu')[source]¶
As described in Adversarial Attacks on Neural Networks for Graph Data (KDD’18): "Rnd is an attack in which we modify the structure of the graph. Given our target node v, in each step we randomly sample nodes u whose label is different from v and add the edge u,v to the graph structure."
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.targeted_attack import RND
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Attack Model
>>> target_node = 0
>>> model = RND()
>>> # Attack
>>> model.attack(adj, labels, idx_train, target_node, n_perturbations=5)
>>> modified_adj = model.modified_adj
>>> # # You can also inject nodes
>>> # model.add_nodes(features, adj, labels, idx_train, target_node, n_added=10, n_perturbations=100)
>>> # modified_adj = model.modified_adj
add_nodes(features, ori_adj, labels, idx_train, target_node, n_added=1, n_perturbations=10, **kwargs)[source]¶
For each added node, first connect the target node to the added fake node, then randomly connect the fake node to other nodes whose label differs from the target node’s. For the node features, simply copy those of an arbitrary node.
attack(ori_adj, labels, idx_train, target_node, n_perturbations, **kwargs)[source]¶
Randomly sample nodes u whose label is different from v and add the edge (u, v) to the graph structure. This baseline only has access to the true class labels of the training set.
- Parameters
ori_adj (scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – node training indices
target_node (int) – target node index to be attacked
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
class Nettack(model, nnodes=None, attack_structure=True, attack_features=False, device='cpu')[source]¶
Nettack.
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.targeted_attack import Nettack
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> target_node = 0
>>> model = Nettack(surrogate, nnodes=adj.shape[0], attack_structure=True, attack_features=True, device='cpu').to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, target_node, n_perturbations=5)
>>> modified_adj = model.modified_adj
>>> modified_features = model.modified_features
attack(features, adj, labels, target_node, n_perturbations, direct=True, n_influencers=0, ll_cutoff=0.004, verbose=True, **kwargs)[source]¶
Generate perturbations on the input graph.
- Parameters
features (torch.Tensor or scipy.sparse.csr_matrix) – Original (unperturbed) node feature matrix. Note that a torch.Tensor will be automatically transformed into scipy.sparse.csr_matrix
adj (torch.Tensor or scipy.sparse.csr_matrix) – Original (unperturbed) adjacency matrix. Note that a torch.Tensor will be automatically transformed into scipy.sparse.csr_matrix
labels – node labels
target_node (int) – target node index to be attacked
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
direct (bool) – whether to conduct direct attack
n_influencers – number of influencer nodes used when performing an indirect attack (i.e., when direct is set to False). Ignored when direct is True.
ll_cutoff (float) – The critical value for the likelihood ratio test of the power-law distributions, based on the chi-square distribution with one degree of freedom. The default value 0.004 corresponds to a p-value of roughly 0.95.
verbose (bool) – whether to show verbose logs
compute_cooccurrence_constraint(nodes)[source]¶
Co-occurrence constraint as described in the paper.
- Parameters
nodes (np.array) – Nodes whose features are considered for change
- Returns
Binary matrix of dimension len(nodes) x D. A 1 in entry n,d indicates that we are allowed to add feature d to the features of node n.
- Return type
np.array [len(nodes), D], dtype bool
compute_new_a_hat_uv(potential_edges, target_node)[source]¶
Compute the updated A_hat_square_uv entries that would result from inserting/deleting the input edges, for every edge.
- Parameters
potential_edges (np.array, shape [P,2], dtype int) – The edges to check.
- Returns
The updated A_hat_square_uv entries, a sparse [P, N] matrix, where P is len(potential_edges).
- Return type
sp.sparse_matrix
filter_potential_singletons(modified_adj)[source]¶
Computes a mask for entries potentially leading to singleton nodes, i.e., where one of the two nodes corresponding to the entry has degree 1 and there is an edge between the two nodes.
get_attacker_nodes(n=5, add_additional_nodes=False)[source]¶
Determine the influencer nodes to attack node i based on the weights W and the attributes X.
struct_score(a_hat_uv, XW)[source]¶
Compute structure scores, cf. Eq. 15 in the paper.
- Parameters
a_hat_uv (sp.sparse_matrix, shape [P,2]) – Entries of matrix A_hat^2_u for each potential edge (see paper for explanation)
XW (sp.sparse_matrix, shape [N, K], dtype float) – The class logits for each node.
- Returns
The struct score for every row in a_hat_uv
- Return type
np.array [P,]
class IGAttack(model, nnodes=None, feature_shape=None, attack_structure=True, attack_features=True, device='cpu')[source]¶
IGAttack: IG-FGSM. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, https://arxiv.org/pdf/1903.01610.pdf.
- Parameters
model – model to attack
nnodes (int) – number of nodes in the input graph
feature_shape (tuple) – shape of the input node features
attack_structure (bool) – whether to attack graph structure
attack_features (bool) – whether to attack node features
device (str) – ‘cpu’ or ‘cuda’
Examples
>>> from deeprobust.graph.data import Dataset
>>> from deeprobust.graph.defense import GCN
>>> from deeprobust.graph.targeted_attack import IGAttack
>>> data = Dataset(root='/tmp/', name='cora')
>>> adj, features, labels = data.adj, data.features, data.labels
>>> idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
>>> # Setup Surrogate model
>>> surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16, dropout=0, with_relu=False, with_bias=False, device='cpu').to('cpu')
>>> surrogate.fit(features, adj, labels, idx_train, idx_val, patience=30)
>>> # Setup Attack Model
>>> target_node = 0
>>> model = IGAttack(surrogate, nnodes=adj.shape[0], attack_structure=True, attack_features=True, device='cpu').to('cpu')
>>> # Attack
>>> model.attack(features, adj, labels, idx_train, target_node, n_perturbations=5, steps=10)
>>> modified_adj = model.modified_adj
>>> modified_features = model.modified_features
attack(ori_features, ori_adj, labels, idx_train, target_node, n_perturbations, steps=10, **kwargs)[source]¶
Generate perturbations on the input graph.
- Parameters
ori_features – Original (unperturbed) node feature matrix
ori_adj – Original (unperturbed) adjacency matrix
labels – node labels
idx_train – training node indices
target_node (int) – target node index to be attacked
n_perturbations (int) – Number of perturbations on the input graph. Perturbations could be edge removals/additions or feature removals/additions.
steps (int) – steps for computing integrated gradients
calc_importance_edge(features, adj_norm, labels, steps)[source]¶
Calculate integrated gradients for edges. Ideally the gradient would be taken with respect to adj rather than adj_norm, but that computation is too time-consuming, so the gradient of the loss is computed with respect to adj_norm instead.
class RLS2V(env, features, labels, idx_meta, idx_test, list_action_space, num_mod, reward_type, batch_size=10, num_wrong=0, bilin_q=1, embed_dim=64, gm='mean_field', mlp_hidden=64, max_lv=1, save_dir='checkpoint_dqn', device=None)[source]¶
Reinforcement learning agent for RL-S2V attack.
- Parameters
env – Node attack environment
features – node features matrix
labels – node labels
idx_meta – node meta indices
idx_test – node test indices
list_action_space (list) – list of action space
num_mod – number of modifications (perturbations) on the graph
reward_type (str) – type of reward (e.g., ‘binary’)
batch_size – batch size for training DQN
save_dir – saving directory for model checkpoints
device (str) – ‘cpu’ or ‘cuda’
Examples
See details in https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_rl_s2v.py