
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker ByzantineWorker(index=9)
=> Add worker ByzantineWorker(index=10)

=== Start adding graph ===
<codes.graph_utils.TorusByzantineGraph object at 0x7f3080223400>
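The `TorusByzantineGraph` name suggests the workers gossip over a 2D torus topology. The grid shape and how the two Byzantine nodes attach to it are not shown in the log; this sketch only illustrates the wrap-around 4-neighbourhood a torus implies:

```python
def torus_neighbors(r, c, rows, cols):
    """4-neighbourhood of node (r, c) on a rows x cols torus:
    edges wrap around in both dimensions."""
    return [
        ((r - 1) % rows, c),  # up (wraps to bottom row)
        ((r + 1) % rows, c),  # down
        (r, (c - 1) % cols),  # left (wraps to last column)
        (r, (c + 1) % cols),  # right
    ]

# On a 3x3 torus the corner (0, 0) wraps to row 2 and column 2:
print(torus_neighbors(0, 0, 3, 3))  # [(2, 0), (1, 0), (0, 2), (0, 1)]
```

Every node has exactly four neighbours, so the mixing matrix has constant degree regardless of grid position.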

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([1, 1, 2, 1, 5], device='cuda:0')
Worker 10 has targets: tensor([8, 9, 1, 0, 3], device='cuda:0')
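The peek shows a heavily non-IID split: honest worker k sees almost exclusively digit k, while the two Byzantine workers draw mixed labels. That pattern is what you get by sorting the dataset by label and giving each worker a contiguous shard; the actual partitioning code is not shown, so this sketch is an inference from the log:

```python
def shard_by_label(labels, n_workers):
    """Sort sample indices by label and split into contiguous shards,
    so worker k sees (mostly) one class -- a common non-IID partition."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    shard = len(order) // n_workers
    return [order[k * shard:(k + 1) * shard] for k in range(n_workers)]

# Toy example: 90 samples, labels 0..8 repeated, 9 honest workers.
labels = [i % 9 for i in range(90)]
shards = shard_by_label(labels, 9)
print({labels[i] for i in shards[0]})  # {0}
```

With 10 MNIST classes over 9 honest workers the shard boundaries fall mid-class, which explains the occasional off-label sample (e.g. worker 3 seeing a few 4s above).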


=== Log global consensus distance @ E1B0 ===
consensus_distance=0.015
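The scalar logged here measures how far the workers' models have drifted apart. The exact formula used by the codebase is not shown; a common definition is the average Euclidean distance of each worker's flattened parameter vector from the global mean, sketched below:

```python
import math

def consensus_distance(models):
    """Average Euclidean distance of each worker's flattened parameter
    vector from the mean model (one common definition; the formula the
    codebase actually uses is an assumption)."""
    n, d = len(models), len(models[0])
    mean = [sum(m[j] for m in models) / n for j in range(d)]
    dists = [math.sqrt(sum((m[j] - mean[j]) ** 2 for j in range(d)))
             for m in models]
    return sum(dists) / n

# Two workers symmetric around the mean, each at distance 1:
print(consensus_distance([[0.0, 0.0], [2.0, 0.0]]))  # 1.0
```

By this reading, the steady climb from 0.015 here to ~0.99 by epoch 19 shows the non-IID shards (and the Byzantine workers) pulling the local models apart faster than gossip averaging pulls them together.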


[E 1B10 |   3872/60000 (  6%) ] Loss: 0.7751 top1= 76.3889
[E 1B20 |   7392/60000 ( 12%) ] Loss: 0.6672 top1= 74.6528
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.5048 top1= 82.6389
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.3323 top1= 90.2778
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.2556 top1= 93.4028
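The progress counter reads as cumulative samples across all 11 workers: with a per-worker batch size of 32 (inferred from the counts, not stated in the log), one step covers 32 × 11 = 352 samples, matching the 352, 3872 (= 11 × 352), 7392 (= 21 × 352), … counters above:

```python
# Inferred reading of "[E 1B10 | 3872/60000]": samples seen =
# (batch_index + 1) * per_worker_batch_size * n_workers.
def samples_seen(batch_idx, batch_size=32, n_workers=11):
    return (batch_idx + 1) * batch_size * n_workers

print([samples_seen(b) for b in (0, 10, 20, 30, 40, 50)])
# [352, 3872, 7392, 10912, 14432, 17952]
```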

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.5346 top1= 54.8077
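The "Averaged model" evaluation combines all workers' parameters into a single model before validation, which is why its 54.8% accuracy sits far below the per-worker training accuracies above: with non-IID shards and Byzantine workers, locally strong models average into a weak one. A minimal sketch of uniform parameter averaging (the real code may average only honest workers, or operate on PyTorch state dicts):

```python
def average_models(models):
    """Uniform average of workers' flattened parameter vectors --
    a sketch of how the evaluated 'Averaged model' could be built."""
    n, d = len(models), len(models[0])
    return [sum(m[j] for m in models) / n for j in range(d)]

print(average_models([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```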

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.3183 top1= 90.6250

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.336


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.1319 top1= 96.1806
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.3064 top1= 87.5000
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.5242 top1= 89.5833
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.2578 top1= 95.4861
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.3743 top1= 95.4861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4617 top1= 55.4287

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.1789 top1= 97.9167

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.424


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.2361 top1= 91.6667
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.4876 top1= 84.7222
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.0851 top1= 97.9167
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.1349 top1= 96.1806
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.3119 top1= 89.5833

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3551 top1= 58.6939

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.2159 top1= 92.3611

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.458


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.2756 top1= 91.6667
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.4617 top1= 89.9306
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.1474 top1= 96.5278
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.2022 top1= 94.4444
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.0931 top1= 97.5694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2094 top1= 62.4499

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.0658 top1= 98.9583

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.553


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.0723 top1= 97.9167
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.0628 top1= 97.9167
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.0881 top1= 97.9167
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.0645 top1= 97.9167
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.4875 top1= 91.3194

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1512 top1= 63.3614

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.2185 top1= 92.3611

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.508


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.1349 top1= 96.1806
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.5536 top1= 81.5972
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.9559 top1= 87.5000
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.4723 top1= 95.8333
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.1102 top1= 96.5278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0026 top1= 71.3041

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.1085 top1= 97.5694

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.519


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.2717 top1= 91.3194
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.1724 top1= 94.4444
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.1173 top1= 94.4444
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.1421 top1= 95.1389
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.0777 top1= 97.9167

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0371 top1= 70.6530

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.1188 top1= 97.5694

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.570


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.1355 top1= 95.8333
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.2437 top1= 93.7500
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.5401 top1= 92.7083
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.7027 top1= 94.7917
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.1596 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0654 top1= 65.5349

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.2692 top1= 97.9167

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.636


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.4014 top1= 95.1389
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.2418 top1= 90.9722
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.2626 top1= 93.0556
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.1558 top1= 96.5278
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.0572 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0008 top1= 68.5096

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.0656 top1= 98.2639

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.675


[E10B10 |   3872/60000 (  6%) ] Loss: 0.0765 top1= 97.2222
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.3041 top1= 93.7500
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.1138 top1= 95.8333
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.4104 top1= 92.3611
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.1839 top1= 94.0972

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9651 top1= 70.3526

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.1481 top1= 96.1806

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.733


[E11B10 |   3872/60000 (  6%) ] Loss: 0.0773 top1= 97.5694
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.0952 top1= 96.8750
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.1117 top1= 94.7917
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.0475 top1= 98.9583
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.0533 top1= 97.9167

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9700 top1= 68.3393

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.0547 top1= 98.2639

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.776


[E12B10 |   3872/60000 (  6%) ] Loss: 0.0773 top1= 97.2222
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.1002 top1= 96.5278
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.2049 top1= 95.4861
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.0932 top1= 97.5694
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.0981 top1= 96.1806

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9918 top1= 66.9471

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.0431 top1= 98.6111

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.803


[E13B10 |   3872/60000 (  6%) ] Loss: 0.0849 top1= 96.5278
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.0372 top1= 98.9583
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.1964 top1= 95.1389
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.0668 top1= 97.9167
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.5300 top1= 96.1806

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9537 top1= 67.4079

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.0370 top1= 98.6111

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.846


[E14B10 |   3872/60000 (  6%) ] Loss: 0.0623 top1= 97.9167
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.1329 top1= 95.4861
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.2667 top1= 92.7083
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.0831 top1= 97.2222
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.0315 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9299 top1= 70.3025

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.0418 top1= 98.9583

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.884


[E15B10 |   3872/60000 (  6%) ] Loss: 0.0387 top1= 99.3056
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.0547 top1= 98.6111
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.1273 top1= 97.2222
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.0243 top1= 99.3056
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.0458 top1= 98.2639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0151 top1= 62.4700

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.0523 top1= 98.2639

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.921


[E16B10 |   3872/60000 (  6%) ] Loss: 0.0323 top1= 98.9583
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.0507 top1= 98.6111
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.0425 top1= 98.9583
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.0467 top1= 98.2639
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.0214 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9855 top1= 64.6735

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.0320 top1= 99.3056

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.929


[E17B10 |   3872/60000 (  6%) ] Loss: 0.0505 top1= 98.6111
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.0323 top1= 98.9583
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.0527 top1= 98.9583
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.0200 top1= 99.6528
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.0237 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9077 top1= 70.8333

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.0607 top1= 97.9167

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.961


[E18B10 |   3872/60000 (  6%) ] Loss: 0.0448 top1= 98.6111
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.0416 top1= 98.9583
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.0558 top1= 98.2639
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.0145 top1=100.0000
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.0229 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9477 top1= 66.4263

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.0407 top1= 98.2639

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.988


[E19B10 |   3872/60000 (  6%) ] Loss: 0.0389 top1= 98.6111
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.0547 top1= 97.5694
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.0555 top1= 98.2639
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.0242 top1= 99.3056
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.0689 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9662 top1= 66.4964

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.0351 top1= 99.3056

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.896


[E20B10 |   3872/60000 (  6%) ] Loss: 0.0718 top1= 97.2222
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.3125 top1= 94.7917
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.1019 top1= 98.6111
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.2796 top1= 94.0972
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.1813 top1= 94.0972

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0846 top1= 64.9339

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.2499 top1= 92.3611

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.604


[E21B10 |   3872/60000 (  6%) ] Loss: 0.1142 top1= 97.2222
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.1122 top1= 97.5694
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.6439 top1= 86.8056
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.0656 top1= 97.9167
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.0664 top1= 98.2639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8855 top1= 72.8566

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.0551 top1= 98.9583

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.645


[E22B10 |   3872/60000 (  6%) ] Loss: 0.0682 top1= 98.6111
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.0636 top1= 97.5694
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.3399 top1= 95.1389
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.0793 top1= 97.2222
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.0550 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8795 top1= 71.4543

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.0608 top1= 98.6111

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.669


[E23B10 |   3872/60000 (  6%) ] Loss: 0.0734 top1= 98.2639
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.1026 top1= 96.8750
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.0659 top1= 97.5694
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.0487 top1= 97.9167
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.0418 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8587 top1= 73.2171

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.0415 top1= 98.6111

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.697


[E24B10 |   3872/60000 (  6%) ] Loss: 0.0811 top1= 97.2222
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.0807 top1= 96.8750
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.0938 top1= 96.5278
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.0279 top1= 98.6111
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.0338 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9074 top1= 69.9419

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.0757 top1= 97.9167

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.704


[E25B10 |   3872/60000 (  6%) ] Loss: 0.0733 top1= 97.9167
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.0948 top1= 96.5278
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.4158 top1= 96.1806
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.1129 top1= 96.5278
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.0421 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8640 top1= 72.7264

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.0259 top1= 99.6528

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.731


[E26B10 |   3872/60000 (  6%) ] Loss: 0.0524 top1= 98.2639
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.0409 top1= 98.6111
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.0422 top1= 98.2639
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.0436 top1= 99.3056
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.0371 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8928 top1= 70.6330

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.0410 top1= 99.3056

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.755


[E27B10 |   3872/60000 (  6%) ] Loss: 0.0985 top1= 96.1806
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.0606 top1= 98.6111
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.0436 top1= 98.6111
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.0418 top1= 98.6111
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.1091 top1= 96.8750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8456 top1= 72.5361

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.0333 top1= 98.9583

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.774


[E28B10 |   3872/60000 (  6%) ] Loss: 0.0710 top1= 97.2222
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.0492 top1= 98.2639
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.0536 top1= 98.9583
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.0733 top1= 97.9167
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.1170 top1= 96.5278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8361 top1= 73.5377

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.0274 top1= 99.3056

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.800


[E29B10 |   3872/60000 (  6%) ] Loss: 0.0845 top1= 96.5278
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.1957 top1= 95.8333
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.0336 top1= 98.9583
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.0859 top1= 97.2222
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.0364 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8569 top1= 72.1354

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.0273 top1= 99.6528

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.813


[E30B10 |   3872/60000 (  6%) ] Loss: 0.0965 top1= 95.4861
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.0242 top1= 99.6528
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.1338 top1= 95.8333
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.0338 top1= 98.6111
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.0356 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8278 top1= 74.1687

