
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker SGDMWorker(index=9, momentum=0.9)
=> Add worker ByzantineWorker(index=10)
=> Add worker ByzantineWorker(index=11)
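The ten honest workers above run SGD with heavy-ball momentum (momentum=0.9). A minimal sketch of the update rule such a worker presumably applies — the name `sgdm_step` and the flat-list parameter representation are illustrative, not taken from the codebase:

```python
def sgdm_step(w, grad, buf, lr=0.1, momentum=0.9):
    """One SGD-with-momentum step on a flat parameter list.

    Classic heavy-ball form: buf <- momentum*buf + grad; w <- w - lr*buf.
    Returns the updated parameters and momentum buffer.
    """
    new_buf = [momentum * b + g for b, g in zip(buf, grad)]
    new_w = [x - lr * b for x, b in zip(w, new_buf)]
    return new_w, new_buf
```

The two ByzantineWorkers carry no momentum argument in the log; what they actually transmit to their graph neighbours depends on the configured attack, which this log does not show.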

=== Start adding graph ===
<codes.graph_utils.RandomSmallWorldGraph object at 0x7f201dc82400>
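`RandomSmallWorldGraph` prints only its default object repr here. Judging by the name, the topology is presumably a Watts–Strogatz-style construction: a ring lattice with random edge rewiring. A self-contained sketch of that construction (function name, parameters, and the edge-set representation are all assumptions, not the repo's API):

```python
import random

def small_world_adjacency(n, k, p, seed=0):
    """Watts-Strogatz-style edge set on nodes 0..n-1.

    Start from a ring lattice where each node links to its k nearest
    clockwise neighbours, then rewire each edge to a uniformly random
    target with probability p (avoiding self-loops and duplicates).
    """
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:
                b = rng.randrange(n)
                while b == a or (min(a, b), max(a, b)) in edges:
                    b = rng.randrange(n)
            edges.add((min(a, b), max(a, b)))
    return edges
```

With p=0 this is the plain ring lattice; small p keeps high clustering while shortcuts shrink path lengths — the "small world" property logged below.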

Train epoch 1
[E 1B0  |    384/60000 (  1%) ] Loss: 2.3054 top1= 10.0000

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([1, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([2, 3, 3, 3, 3], device='cuda:0')
Worker 4 has targets: tensor([3, 4, 4, 4, 4], device='cuda:0')
Worker 5 has targets: tensor([4, 5, 5, 5, 5], device='cuda:0')
Worker 6 has targets: tensor([6, 6, 6, 6, 6], device='cuda:0')
Worker 7 has targets: tensor([7, 7, 7, 7, 7], device='cuda:0')
Worker 8 has targets: tensor([7, 8, 8, 8, 8], device='cuda:0')
Worker 9 has targets: tensor([8, 9, 9, 9, 9], device='cuda:0')
Worker 10 has targets: tensor([4, 8, 8, 6, 9], device='cuda:0')
Worker 11 has targets: tensor([5, 3, 6, 0, 9], device='cuda:0')
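The peek shows an extremely non-IID split: each honest worker sees essentially a single class (worker 0 only 0s, worker 1 only 1s, ...), while the two Byzantine workers draw mixed labels. A sort-by-label sharding that would reproduce this pattern for the honest workers might look like the following — `shard_by_label` is a hypothetical helper, not from the repo:

```python
def shard_by_label(labels, n_workers):
    """Sort sample indices by label, then hand out equal contiguous shards.

    Each worker ends up with one (or at a shard boundary, two) classes,
    matching the near-single-class batches seen in the peek above.
    """
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    per = len(labels) // n_workers
    return [order[w * per:(w + 1) * per] for w in range(n_workers)]
```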


=== Log global consensus distance @ E1B0 ===
consensus_distance=0.017
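Consensus distance measures how far the workers' parameter vectors have drifted apart. One common definition is the mean L2 distance of each worker's model from the worker average; the repo's exact formula (squared vs. unsquared, normalization, which workers are included) may differ, so treat this as an assumed sketch over flat parameter lists:

```python
def consensus_distance(models):
    """Mean L2 distance of each worker's parameters from the average model."""
    dim = len(models[0])
    mean = [sum(m[d] for m in models) / len(models) for d in range(dim)]
    dists = [sum((m[d] - mean[d]) ** 2 for d in range(dim)) ** 0.5
             for m in models]
    return sum(dists) / len(models)
```

Note the trend in the log: the distance grows steadily (0.017 at E1B0 up to ~0.88 by epoch 22), i.e. the non-IID shards and Byzantine workers keep pushing the local models apart faster than gossip averaging pulls them together.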


=== Log average shortest path distance for small world @ E1B0 ===
2.7777777777777777
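The 2.777... figure is the average shortest path length over node pairs of the small-world communication graph. For a graph of this size it can be computed exactly with one BFS per node; a pure-Python sketch, where `adj` maps each node id 0..n-1 to its neighbour list:

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean BFS hop distance over all ordered node pairs (connected graph)."""
    n = len(adj)
    total = cnt = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                cnt += 1
    return total / cnt
```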


[E 1B10 |   4224/60000 (  7%) ] Loss: 0.6013 top1= 82.1875
[E 1B20 |   8064/60000 ( 13%) ] Loss: 0.2344 top1= 96.5625
[E 1B30 |  11904/60000 ( 20%) ] Loss: 0.2390 top1= 95.0000
[E 1B40 |  15744/60000 ( 26%) ] Loss: 0.1393 top1= 97.1875
[E 1B50 |  19584/60000 ( 33%) ] Loss: 0.1574 top1= 95.9375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=2.0305 top1= 29.8478
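"Averaged model" evaluation presumably means: average all workers' parameters elementwise into one global model, then run validation on it. The gap between local training top1 (~96%) and global validation top1 (29.8%) is consistent with each worker overfitting its single-class shard while the Byzantine workers pull the average apart. A sketch of the averaging step, in the same illustrative flat-vector form as above:

```python
def average_models(models):
    """Elementwise mean of worker parameter vectors -- the 'averaged model'."""
    dim = len(models[0])
    return [sum(m[d] for m in models) / len(models) for d in range(dim)]
```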

Train epoch 2
[E 2B0  |    384/60000 (  1%) ] Loss: 0.2633 top1= 92.8125

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.224


[E 2B10 |   4224/60000 (  7%) ] Loss: 0.1337 top1= 95.0000
[E 2B20 |   8064/60000 ( 13%) ] Loss: 0.0734 top1= 99.3750
[E 2B30 |  11904/60000 ( 20%) ] Loss: 0.0669 top1= 98.4375
[E 2B40 |  15744/60000 ( 26%) ] Loss: 0.1020 top1= 97.5000
[E 2B50 |  19584/60000 ( 33%) ] Loss: 0.0879 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7961 top1= 31.7107

Train epoch 3
[E 3B0  |    384/60000 (  1%) ] Loss: 0.1595 top1= 94.6875

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.318


[E 3B10 |   4224/60000 (  7%) ] Loss: 0.0746 top1= 98.1250
[E 3B20 |   8064/60000 ( 13%) ] Loss: 0.0477 top1= 99.0625
[E 3B30 |  11904/60000 ( 20%) ] Loss: 0.0656 top1= 99.0625
[E 3B40 |  15744/60000 ( 26%) ] Loss: 0.0407 top1= 99.3750
[E 3B50 |  19584/60000 ( 33%) ] Loss: 0.1206 top1= 95.9375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7789 top1= 32.2616

Train epoch 4
[E 4B0  |    384/60000 (  1%) ] Loss: 0.1546 top1= 95.6250

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.376


[E 4B10 |   4224/60000 (  7%) ] Loss: 0.0379 top1= 99.3750
[E 4B20 |   8064/60000 ( 13%) ] Loss: 0.0673 top1= 98.4375
[E 4B30 |  11904/60000 ( 20%) ] Loss: 0.0605 top1= 98.1250
[E 4B40 |  15744/60000 ( 26%) ] Loss: 0.1223 top1= 96.5625
[E 4B50 |  19584/60000 ( 33%) ] Loss: 0.0590 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7853 top1= 34.7155

Train epoch 5
[E 5B0  |    384/60000 (  1%) ] Loss: 0.1099 top1= 96.8750

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.433


[E 5B10 |   4224/60000 (  7%) ] Loss: 0.0492 top1= 98.1250
[E 5B20 |   8064/60000 ( 13%) ] Loss: 0.0322 top1= 99.0625
[E 5B30 |  11904/60000 ( 20%) ] Loss: 0.0402 top1= 98.1250
[E 5B40 |  15744/60000 ( 26%) ] Loss: 0.1180 top1= 96.2500
[E 5B50 |  19584/60000 ( 33%) ] Loss: 0.0643 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7462 top1= 38.4615

Train epoch 6
[E 6B0  |    384/60000 (  1%) ] Loss: 0.0862 top1= 97.5000

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.465


[E 6B10 |   4224/60000 (  7%) ] Loss: 0.0313 top1= 99.6875
[E 6B20 |   8064/60000 ( 13%) ] Loss: 0.0490 top1= 98.7500
[E 6B30 |  11904/60000 ( 20%) ] Loss: 0.0809 top1= 96.8750
[E 6B40 |  15744/60000 ( 26%) ] Loss: 0.1510 top1= 95.3125
[E 6B50 |  19584/60000 ( 33%) ] Loss: 0.0735 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7650 top1= 36.8890

Train epoch 7
[E 7B0  |    384/60000 (  1%) ] Loss: 0.1675 top1= 94.3750

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.503


[E 7B10 |   4224/60000 (  7%) ] Loss: 0.0664 top1= 98.4375
[E 7B20 |   8064/60000 ( 13%) ] Loss: 0.0208 top1= 99.0625
[E 7B30 |  11904/60000 ( 20%) ] Loss: 0.1003 top1= 96.8750
[E 7B40 |  15744/60000 ( 26%) ] Loss: 0.0313 top1= 99.6875
[E 7B50 |  19584/60000 ( 33%) ] Loss: 0.0546 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7746 top1= 36.2881

Train epoch 8
[E 8B0  |    384/60000 (  1%) ] Loss: 0.1247 top1= 95.6250

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.538


[E 8B10 |   4224/60000 (  7%) ] Loss: 0.0321 top1= 99.3750
[E 8B20 |   8064/60000 ( 13%) ] Loss: 0.0615 top1= 98.1250
[E 8B30 |  11904/60000 ( 20%) ] Loss: 0.0285 top1= 99.0625
[E 8B40 |  15744/60000 ( 26%) ] Loss: 0.0298 top1= 98.4375
[E 8B50 |  19584/60000 ( 33%) ] Loss: 0.0626 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7173 top1= 39.7536

Train epoch 9
[E 9B0  |    384/60000 (  1%) ] Loss: 0.1529 top1= 97.1875

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.569


[E 9B10 |   4224/60000 (  7%) ] Loss: 0.0562 top1= 98.4375
[E 9B20 |   8064/60000 ( 13%) ] Loss: 0.0261 top1= 99.6875
[E 9B30 |  11904/60000 ( 20%) ] Loss: 0.0421 top1= 98.4375
[E 9B40 |  15744/60000 ( 26%) ] Loss: 0.0209 top1= 99.0625
[E 9B50 |  19584/60000 ( 33%) ] Loss: 0.0472 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.6916 top1= 40.7151

Train epoch 10
[E10B0  |    384/60000 (  1%) ] Loss: 0.1142 top1= 97.1875

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.583


[E10B10 |   4224/60000 (  7%) ] Loss: 0.0530 top1= 97.8125
[E10B20 |   8064/60000 ( 13%) ] Loss: 0.0229 top1= 99.3750
[E10B30 |  11904/60000 ( 20%) ] Loss: 0.0262 top1= 99.6875
[E10B40 |  15744/60000 ( 26%) ] Loss: 0.0305 top1= 99.0625
[E10B50 |  19584/60000 ( 33%) ] Loss: 0.0572 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7338 top1= 39.0024

Train epoch 11
[E11B0  |    384/60000 (  1%) ] Loss: 0.1470 top1= 97.1875

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.617


[E11B10 |   4224/60000 (  7%) ] Loss: 0.1104 top1= 95.9375
[E11B20 |   8064/60000 ( 13%) ] Loss: 0.0558 top1= 99.6875
[E11B30 |  11904/60000 ( 20%) ] Loss: 0.0290 top1= 99.3750
[E11B40 |  15744/60000 ( 26%) ] Loss: 0.0143 top1=100.0000
[E11B50 |  19584/60000 ( 33%) ] Loss: 0.1554 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7764 top1= 38.7921

Train epoch 12
[E12B0  |    384/60000 (  1%) ] Loss: 0.0877 top1= 97.8125

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.632


[E12B10 |   4224/60000 (  7%) ] Loss: 0.0580 top1= 98.1250
[E12B20 |   8064/60000 ( 13%) ] Loss: 0.0218 top1= 99.3750
[E12B30 |  11904/60000 ( 20%) ] Loss: 0.0370 top1= 99.0625
[E12B40 |  15744/60000 ( 26%) ] Loss: 0.0427 top1= 98.1250
[E12B50 |  19584/60000 ( 33%) ] Loss: 0.0567 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7749 top1= 39.9740

Train epoch 13
[E13B0  |    384/60000 (  1%) ] Loss: 0.0770 top1= 96.8750

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.659


[E13B10 |   4224/60000 (  7%) ] Loss: 0.0346 top1= 99.3750
[E13B20 |   8064/60000 ( 13%) ] Loss: 0.0158 top1= 99.6875
[E13B30 |  11904/60000 ( 20%) ] Loss: 0.0265 top1= 99.3750
[E13B40 |  15744/60000 ( 26%) ] Loss: 0.0250 top1= 99.3750
[E13B50 |  19584/60000 ( 33%) ] Loss: 0.0637 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8060 top1= 39.3429

Train epoch 14
[E14B0  |    384/60000 (  1%) ] Loss: 0.0509 top1= 98.1250

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.688


[E14B10 |   4224/60000 (  7%) ] Loss: 0.0268 top1= 99.0625
[E14B20 |   8064/60000 ( 13%) ] Loss: 0.0225 top1= 98.7500
[E14B30 |  11904/60000 ( 20%) ] Loss: 0.0182 top1= 99.6875
[E14B40 |  15744/60000 ( 26%) ] Loss: 0.0263 top1= 99.0625
[E14B50 |  19584/60000 ( 33%) ] Loss: 0.0467 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7976 top1= 40.5349

Train epoch 15
[E15B0  |    384/60000 (  1%) ] Loss: 0.1131 top1= 98.1250

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.698


[E15B10 |   4224/60000 (  7%) ] Loss: 0.0288 top1= 99.0625
[E15B20 |   8064/60000 ( 13%) ] Loss: 0.0180 top1=100.0000
[E15B30 |  11904/60000 ( 20%) ] Loss: 0.0396 top1= 98.7500
[E15B40 |  15744/60000 ( 26%) ] Loss: 0.0138 top1=100.0000
[E15B50 |  19584/60000 ( 33%) ] Loss: 0.2538 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8185 top1= 40.9856

Train epoch 16
[E16B0  |    384/60000 (  1%) ] Loss: 0.0526 top1= 98.1250

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.725


[E16B10 |   4224/60000 (  7%) ] Loss: 0.0525 top1= 98.7500
[E16B20 |   8064/60000 ( 13%) ] Loss: 0.0403 top1= 98.1250
[E16B30 |  11904/60000 ( 20%) ] Loss: 0.0178 top1= 99.6875
[E16B40 |  15744/60000 ( 26%) ] Loss: 0.0401 top1= 99.0625
[E16B50 |  19584/60000 ( 33%) ] Loss: 0.0572 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8266 top1= 39.9740

Train epoch 17
[E17B0  |    384/60000 (  1%) ] Loss: 0.0682 top1= 98.1250

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.748


[E17B10 |   4224/60000 (  7%) ] Loss: 0.0243 top1= 99.6875
[E17B20 |   8064/60000 ( 13%) ] Loss: 0.1068 top1= 96.2500
[E17B30 |  11904/60000 ( 20%) ] Loss: 0.0545 top1= 98.4375
[E17B40 |  15744/60000 ( 26%) ] Loss: 0.0114 top1=100.0000
[E17B50 |  19584/60000 ( 33%) ] Loss: 0.0539 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7787 top1= 42.2776

Train epoch 18
[E18B0  |    384/60000 (  1%) ] Loss: 0.2271 top1= 97.8125

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.752


[E18B10 |   4224/60000 (  7%) ] Loss: 0.0668 top1= 99.3750
[E18B20 |   8064/60000 ( 13%) ] Loss: 0.0731 top1= 99.3750
[E18B30 |  11904/60000 ( 20%) ] Loss: 0.0422 top1= 98.7500
[E18B40 |  15744/60000 ( 26%) ] Loss: 0.0256 top1= 98.7500
[E18B50 |  19584/60000 ( 33%) ] Loss: 0.1000 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7928 top1= 41.9772

Train epoch 19
[E19B0  |    384/60000 (  1%) ] Loss: 0.0541 top1= 97.8125

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.774


[E19B10 |   4224/60000 (  7%) ] Loss: 0.0198 top1= 99.6875
[E19B20 |   8064/60000 ( 13%) ] Loss: 0.0486 top1= 97.8125
[E19B30 |  11904/60000 ( 20%) ] Loss: 0.0580 top1= 99.0625
[E19B40 |  15744/60000 ( 26%) ] Loss: 0.0382 top1= 98.4375
[E19B50 |  19584/60000 ( 33%) ] Loss: 0.0472 top1= 99.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7095 top1= 46.2039

Train epoch 20
[E20B0  |    384/60000 (  1%) ] Loss: 0.3280 top1= 98.1250

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.826


[E20B10 |   4224/60000 (  7%) ] Loss: 0.0372 top1= 99.6875
[E20B20 |   8064/60000 ( 13%) ] Loss: 0.0791 top1= 99.0625
[E20B30 |  11904/60000 ( 20%) ] Loss: 0.0256 top1= 99.6875
[E20B40 |  15744/60000 ( 26%) ] Loss: 0.0131 top1=100.0000
[E20B50 |  19584/60000 ( 33%) ] Loss: 0.0399 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7457 top1= 43.6699

Train epoch 21
[E21B0  |    384/60000 (  1%) ] Loss: 0.0237 top1= 99.0625

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.815


[E21B10 |   4224/60000 (  7%) ] Loss: 0.0320 top1= 99.3750
[E21B20 |   8064/60000 ( 13%) ] Loss: 0.0620 top1= 97.8125
[E21B30 |  11904/60000 ( 20%) ] Loss: 0.0512 top1= 99.0625
[E21B40 |  15744/60000 ( 26%) ] Loss: 0.0138 top1=100.0000
[E21B50 |  19584/60000 ( 33%) ] Loss: 0.0470 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7526 top1= 50.9816

Train epoch 22
[E22B0  |    384/60000 (  1%) ] Loss: 0.5146 top1= 97.8125

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.872


[E22B10 |   4224/60000 (  7%) ] Loss: 0.0478 top1= 98.7500
[E22B20 |   8064/60000 ( 13%) ] Loss: 0.0511 top1= 97.8125
[E22B30 |  11904/60000 ( 20%) ] Loss: 0.0256 top1= 99.6875
[E22B40 |  15744/60000 ( 26%) ] Loss: 0.0088 top1=100.0000
[E22B50 |  19584/60000 ( 33%) ] Loss: 0.0738 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7591 top1= 46.4744

Train epoch 23
[E23B0  |    384/60000 (  1%) ] Loss: 0.4083 top1= 98.1250

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.861


[E23B10 |   4224/60000 (  7%) ] Loss: 0.0124 top1= 99.6875
[E23B20 |   8064/60000 ( 13%) ] Loss: 0.0718 top1= 96.5625
[E23B30 |  11904/60000 ( 20%) ] Loss: 0.0209 top1= 99.6875
[E23B40 |  15744/60000 ( 26%) ] Loss: 0.0161 top1=100.0000
[E23B50 |  19584/60000 ( 33%) ] Loss: 0.0556 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7845 top1= 43.0889

Train epoch 24
[E24B0  |    384/60000 (  1%) ] Loss: 0.0607 top1= 97.8125

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.855


[E24B10 |   4224/60000 (  7%) ] Loss: 0.1256 top1= 95.9375
[E24B20 |   8064/60000 ( 13%) ] Loss: 0.0198 top1= 99.3750
[E24B30 |  11904/60000 ( 20%) ] Loss: 0.0264 top1= 99.3750
[E24B40 |  15744/60000 ( 26%) ] Loss: 0.0075 top1=100.0000
[E24B50 |  19584/60000 ( 33%) ] Loss: 0.0521 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7977 top1= 43.5797

Train epoch 25
[E25B0  |    384/60000 (  1%) ] Loss: 0.0535 top1= 98.4375

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.864


[E25B10 |   4224/60000 (  7%) ] Loss: 0.0416 top1= 98.7500
[E25B20 |   8064/60000 ( 13%) ] Loss: 0.0362 top1= 98.4375
[E25B30 |  11904/60000 ( 20%) ] Loss: 0.0607 top1= 99.0625
[E25B40 |  15744/60000 ( 26%) ] Loss: 0.0071 top1= 99.6875
[E25B50 |  19584/60000 ( 33%) ] Loss: 0.0425 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8327 top1= 42.8786

Train epoch 26
[E26B0  |    384/60000 (  1%) ] Loss: 0.0584 top1= 98.1250

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.883


[E26B10 |   4224/60000 (  7%) ] Loss: 0.0570 top1= 98.1250
[E26B20 |   8064/60000 ( 13%) ] Loss: 0.0162 top1=100.0000
[E26B30 |  11904/60000 ( 20%) ] Loss: 0.0937 top1= 99.0625
[E26B40 |  15744/60000 ( 26%) ] Loss: 0.1945 top1= 95.0000
[E26B50 |  19584/60000 ( 33%) ] Loss: 0.0633 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3528 top1= 50.7412

Train epoch 27
[E27B0  |    384/60000 (  1%) ] Loss: 0.0979 top1= 96.5625

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.757


[E27B10 |   4224/60000 (  7%) ] Loss: 0.0837 top1= 96.8750
[E27B20 |   8064/60000 ( 13%) ] Loss: 0.0796 top1= 98.1250
[E27B30 |  11904/60000 ( 20%) ] Loss: 0.0548 top1= 98.4375
[E27B40 |  15744/60000 ( 26%) ] Loss: 0.0399 top1= 99.0625
[E27B50 |  19584/60000 ( 33%) ] Loss: 0.1483 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2755 top1= 59.7857

Train epoch 28
[E28B0  |    384/60000 (  1%) ] Loss: 0.4099 top1= 96.2500

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.775


[E28B10 |   4224/60000 (  7%) ] Loss: 0.0855 top1= 98.1250
[E28B20 |   8064/60000 ( 13%) ] Loss: 0.0692 top1= 98.1250
[E28B30 |  11904/60000 ( 20%) ] Loss: 0.1187 top1= 98.4375
[E28B40 |  15744/60000 ( 26%) ] Loss: 0.0652 top1= 98.4375
[E28B50 |  19584/60000 ( 33%) ] Loss: 0.1201 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2426 top1= 54.5974

Train epoch 29
[E29B0  |    384/60000 (  1%) ] Loss: 0.0869 top1= 98.1250

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.776


[E29B10 |   4224/60000 (  7%) ] Loss: 0.0571 top1= 98.4375
[E29B20 |   8064/60000 ( 13%) ] Loss: 0.0835 top1= 98.4375
[E29B30 |  11904/60000 ( 20%) ] Loss: 0.0609 top1= 98.1250
[E29B40 |  15744/60000 ( 26%) ] Loss: 0.0685 top1= 98.4375
[E29B50 |  19584/60000 ( 33%) ] Loss: 0.0699 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2429 top1= 59.8558

Train epoch 30
[E30B0  |    384/60000 (  1%) ] Loss: 0.3163 top1= 97.5000

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.794


[E30B10 |   4224/60000 (  7%) ] Loss: 0.0366 top1= 98.7500
[E30B20 |   8064/60000 ( 13%) ] Loss: 0.1476 top1= 98.7500
[E30B30 |  11904/60000 ( 20%) ] Loss: 0.0478 top1= 98.4375
[E30B40 |  15744/60000 ( 26%) ] Loss: 0.0489 top1= 98.7500
[E30B50 |  19584/60000 ( 33%) ] Loss: 0.0456 top1= 99.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1855 top1= 59.8257

