
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker SGDMWorker(index=9, momentum=0.9)
=> Add worker SGDMWorker(index=10, momentum=0.9)
=> Add worker LabelFlippingWorker
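
The roster pairs eleven momentum-SGD workers with one LabelFlippingWorker, a Byzantine participant that trains on corrupted labels. The log does not state which corruption is used; a common choice for a 10-class dataset such as MNIST maps each label y to 9 - y, sketched here under that assumption:

```python
def flip_labels(targets, num_classes=10):
    """Byzantine label flip: map each label y to (num_classes - 1) - y.

    This is one common choice (0 <-> 9, 1 <-> 8, ... on MNIST); the exact
    corruption applied by LabelFlippingWorker is an assumption here.
    """
    return [(num_classes - 1) - y for y in targets]

# A flipped batch of class-9 samples is reported as class 0 -- consistent
# with the all-zero targets peeked for worker 11 later in this log.
print(flip_labels([9, 9, 9, 9, 9]))  # → [0, 0, 0, 0, 0]
```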

=== Start adding graph ===
TwoCliquesWithByzantine(m=5,b=1)
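
TwoCliquesWithByzantine(m=5,b=1) is not defined in the log, but the mixing matrix printed later is consistent with two 5-node cliques (workers 0-4 and 5-9), a bridge node (worker 10) attached to one member of each clique, and worker 11 left isolated. A sketch of that adjacency, with the layout beyond the two cliques being an assumption read off the matrix:

```python
import numpy as np

def two_cliques_with_bridge(m=5, n_total=12):
    """Adjacency consistent with the mixing matrix logged below:
    clique 1 = nodes 0..m-1, clique 2 = nodes m..2m-1, node 2m bridges
    the cliques via nodes 0 and 2m-1, remaining nodes are isolated
    (the bridge/isolation details are assumptions, not from the log)."""
    A = np.zeros((n_total, n_total), dtype=int)
    for clique in (range(m), range(m, 2 * m)):   # fully connect each clique
        for i in clique:
            for j in clique:
                if i != j:
                    A[i, j] = 1
    A[2 * m, 0] = A[0, 2 * m] = 1                # bridge into clique 1
    A[2 * m, 2 * m - 1] = A[2 * m - 1, 2 * m] = 1  # bridge into clique 2
    return A

A = two_cliques_with_bridge()
print(A.sum(axis=1))  # degrees: [5 4 4 4 4 4 4 4 4 5 2 0]
```

The degree vector (5 for the two bridge-adjacent nodes, 4 inside the cliques, 2 for the bridge, 0 for the isolated node) is what produces the 0.167/0.2/0.233 entries in the logged mixing matrix.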

Train epoch 1
[E 1B0  |    384/60000 (  1%) ] Loss: 2.3142 top1=  7.6705

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 0, 1], device='cuda:0')
Worker 2 has targets: tensor([1, 1, 2, 1, 1], device='cuda:0')
Worker 3 has targets: tensor([2, 2, 3, 2, 2], device='cuda:0')
Worker 4 has targets: tensor([3, 3, 3, 3, 3], device='cuda:0')
Worker 5 has targets: tensor([4, 4, 4, 4, 4], device='cuda:0')
Worker 6 has targets: tensor([5, 5, 5, 5, 5], device='cuda:0')
Worker 7 has targets: tensor([6, 6, 6, 5, 6], device='cuda:0')
Worker 8 has targets: tensor([6, 7, 7, 6, 6], device='cuda:0')
Worker 9 has targets: tensor([7, 7, 8, 7, 7], device='cuda:0')
Worker 10 has targets: tensor([8, 8, 9, 8, 8], device='cuda:0')
Worker 11 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
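
The peeked batches show a label-sorted, non-IID split: each worker's shard is dominated by one class or two adjacent classes (worker 0 sees class 0, worker 10 mostly class 8). A minimal sketch of one scheme that produces this pattern, assuming samples are sorted by label and cut into contiguous equal shards (the framework's actual partitioner is not shown in the log):

```python
def contiguous_shards(labels, n_workers):
    """Sort sample indices by label, then cut into contiguous equal
    shards. Adjacent workers end up holding neighbouring classes,
    matching the peeked target tensors above (an assumed scheme)."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    size = len(order) // n_workers
    return [order[w * size:(w + 1) * size] for w in range(n_workers)]

# Toy data: 10 classes, 12 samples per class, split across 12 workers.
labels = [c for c in range(10) for _ in range(12)]
shards = contiguous_shards(labels, 12)
for w in (0, 5, 11):
    print(w, sorted({labels[i] for i in shards[w]}))
```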

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.000

=== Log clique consensus distance @ E1B0 ===
clique1_consensus_distance=0.000
clique2_consensus_distance=0.000
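
Both distances are 0.000 at E1B0 because every worker starts from the same initial model. A common definition of consensus distance is the average Euclidean distance of each worker's flattened parameter vector from the group mean; the clique variants apply the same formula restricted to clique members. The exact normalisation used by the logging code is an assumption:

```python
import numpy as np

def consensus_distance(params):
    """Average Euclidean distance of each worker's flattened parameter
    vector from the mean vector. One common definition; the logged
    metric may differ in normalisation (assumption)."""
    P = np.stack(params)              # shape: (n_workers, n_params)
    mean = P.mean(axis=0)
    return float(np.linalg.norm(P - mean, axis=1).mean())

# Identical models -> distance 0, matching the E1B0 lines above.
same = [np.ones(4) for _ in range(5)]
print(consensus_distance(same))  # → 0.0
```

As workers train on their disjoint non-IID shards, their parameters drift apart and this metric grows, which is what the E2B0 values later in the log show.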

=== Log mixing matrix @ E1B0 ===
[[0.167 0.167 0.167 0.167 0.167 0.    0.    0.    0.    0.    0.167 0.   ]
 [0.167 0.233 0.2   0.2   0.2   0.    0.    0.    0.    0.    0.    0.   ]
 [0.167 0.2   0.233 0.2   0.2   0.    0.    0.    0.    0.    0.    0.   ]
 [0.167 0.2   0.2   0.233 0.2   0.    0.    0.    0.    0.    0.    0.   ]
 [0.167 0.2   0.2   0.2   0.233 0.    0.    0.    0.    0.    0.    0.   ]
 [0.    0.    0.    0.    0.    0.233 0.2   0.2   0.2   0.167 0.    0.   ]
 [0.    0.    0.    0.    0.    0.2   0.233 0.2   0.2   0.167 0.    0.   ]
 [0.    0.    0.    0.    0.    0.2   0.2   0.233 0.2   0.167 0.    0.   ]
 [0.    0.    0.    0.    0.    0.2   0.2   0.2   0.233 0.167 0.    0.   ]
 [0.    0.    0.    0.    0.    0.167 0.167 0.167 0.167 0.167 0.167 0.   ]
 [0.167 0.    0.    0.    0.    0.    0.    0.    0.    0.167 0.667 0.   ]
 [0.    0.    0.    0.    0.    0.    0.    0.    0.    0.    0.    1.   ]]
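
The printed matrix matches Metropolis-Hastings gossip weights on the two-cliques-plus-bridge topology: W[i][j] = 1/(1 + max(deg_i, deg_j)) on each edge, with the slack placed on the diagonal so every row sums to 1. For example, W[0][1] = 1/(1 + max(5, 4)) = 1/6 ≈ 0.167 and W[1][1] = 1 − 0.167 − 3·0.2 ≈ 0.233, as logged. A sketch (that the framework uses exactly this construction is an inference from the printed values):

```python
import numpy as np

def metropolis_weights(A):
    """Metropolis-Hastings mixing matrix for symmetric adjacency A:
    W[i,j] = 1/(1 + max(deg_i, deg_j)) on edges; the diagonal picks up
    the remainder, making W symmetric and doubly stochastic."""
    deg = A.sum(axis=1)
    n = len(A)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # isolated node 11 gets W[11,11] = 1
    return W

# Adjacency read off this run: two 5-cliques (0-4, 5-9), a bridge
# node 10 linked to workers 0 and 9, and worker 11 isolated.
A = np.zeros((12, 12), dtype=int)
for clique in (range(5), range(5, 10)):
    for i in clique:
        for j in clique:
            if i != j:
                A[i, j] = 1
A[10, 0] = A[0, 10] = A[10, 9] = A[9, 10] = 1

W = metropolis_weights(A)
print(np.round(W[1, 1], 3), np.round(W[0, 1], 3))  # → 0.233 0.167
```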

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=2.2688 top1= 18.3994


=> Averaged model (Clique1 Average Validation Accuracy) | Eval Loss=2.2785 top1= 13.4615


=> Averaged model (Clique2 Average Validation Accuracy) | Eval Loss=2.2855 top1= 18.0589
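
The three evaluation lines score uniformly averaged models: one average over the full group and one per clique. Whether the Byzantine worker is included in the global average is not stated in the log. A minimal sketch of uniform parameter averaging, using plain NumPy arrays in place of PyTorch state dicts:

```python
import numpy as np

def average_models(state_dicts):
    """Uniformly average each named parameter across workers, as the
    'Averaged model' evaluations above do before scoring (whether the
    Byzantine worker is part of the global average is an assumption)."""
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0)
            for k in keys}

# Two toy workers with a single parameter tensor each.
w1 = {"fc.weight": np.array([1.0, 2.0])}
w2 = {"fc.weight": np.array([3.0, 4.0])}
print(average_models([w1, w2]))  # → {'fc.weight': array([2., 3.])}
```

The clique averages are the same operation applied to the five members of each clique, which is why the two clique scores can diverge from the global one (as in the E2B0 evaluations below).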

Train epoch 2
[E 2B0  |    384/60000 (  1%) ] Loss: 2.0447 top1= 46.3068

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.008

=== Log clique consensus distance @ E2B0 ===
clique1_consensus_distance=0.000
clique2_consensus_distance=0.031

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=2.1972 top1= 19.3409


=> Averaged model (Clique1 Average Validation Accuracy) | Eval Loss=2.3739 top1= 18.3093


=> Averaged model (Clique2 Average Validation Accuracy) | Eval Loss=2.2916 top1= 28.2151

