
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker LabelFlippingWorker
=> Add worker LabelFlippingWorker
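The roster above is 9 honest momentum-SGD workers plus 2 Byzantine label-flipping workers. A minimal, hypothetical sketch of what those two worker types might look like (class names match the log, but the internals here are assumptions; the flip rule y → 9 − y is one common choice, not confirmed by the log):

```python
class SGDMWorker:
    """Honest worker running SGD with heavy-ball momentum (hypothetical sketch)."""

    def __init__(self, index, momentum=0.9):
        self.index = index
        self.momentum = momentum
        self.buf = None  # momentum buffer, lazily initialized on first step

    def step(self, grad, lr=0.1):
        # buf <- momentum * buf + grad; update is -lr * buf
        if self.buf is None:
            self.buf = list(grad)
        else:
            self.buf = [self.momentum * b + g for b, g in zip(self.buf, grad)]
        return [-lr * b for b in self.buf]


class LabelFlippingWorker(SGDMWorker):
    """Byzantine worker that trains on flipped labels (assumed flip: y -> 9 - y)."""

    def flip(self, targets, num_classes=10):
        return [num_classes - 1 - t for t in targets]


# 9 honest workers (indices 0-8) + 2 label flippers, matching the log above.
workers = [SGDMWorker(i, momentum=0.9) for i in range(9)]
workers += [LabelFlippingWorker(i) for i in (9, 10)]
```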

=== Start adding graph ===
<codes.graph_utils.TorusByzantineGraph object at 0x7fd18395b400>
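The communication topology is a torus. As a sketch only: the 9 honest workers plausibly sit on a 3×3 torus (each node gossips with its four wrap-around neighbors), with the actual `TorusByzantineGraph` additionally wiring in the Byzantine nodes in some way the log does not show:

```python
def torus_neighbors(i, rows=3, cols=3):
    """Neighbors of node i on a rows x cols torus (wrap-around grid)."""
    r, c = divmod(i, cols)
    return sorted({
        ((r - 1) % rows) * cols + c,  # up (wraps to bottom row)
        ((r + 1) % rows) * cols + c,  # down
        r * cols + (c - 1) % cols,    # left (wraps to last column)
        r * cols + (c + 1) % cols,    # right
    })
```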

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111
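Each progress line reports epoch/batch, samples seen, training loss, and top-1 accuracy. Top-1 here presumably means the usual argmax-matches-target rate, sketched below (an assumption about this codebase's metric, not taken from it):

```python
def top1_accuracy(logits, targets):
    """Percentage of rows whose argmax prediction equals the target label."""
    correct = sum(
        1 for row, t in zip(logits, targets)
        if max(range(len(row)), key=row.__getitem__) == t
    )
    return 100.0 * correct / len(targets)
```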

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([8, 8, 7, 8, 4], device='cuda:0')
Worker 10 has targets: tensor([1, 0, 8, 9, 6], device='cuda:0')
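The peek above shows a heavily non-IID split: honest worker i sees almost exclusively class i, while the two label-flipping workers (9 and 10) report scattered labels. One hypothetical way such a split is produced is a sort-and-shard partition (assumed here for illustration; the actual partitioner is not shown in the log):

```python
def shard_by_class(targets, num_workers=9):
    """Sort sample indices by label, then cut into contiguous equal shards,
    so each worker ends up with (mostly) one class."""
    order = sorted(range(len(targets)), key=lambda j: targets[j])
    size = len(order) // num_workers
    return [order[w * size:(w + 1) * size] for w in range(num_workers)]


# Toy example: 90 samples with labels 0..9 repeated.
targets = [j % 10 for j in range(90)]
shards = shard_by_class(targets, num_workers=9)
```

With 10 classes over 9 workers, each shard is dominated by one class with a small spillover from the next, which matches the mixed entries (e.g. worker 3 seeing 3s and 4s) in the peek above.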

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.000
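The distance is 0.000 at E1B0 because all workers start from the same initialization. A common definition of global consensus distance, assumed here, is the root of the mean squared distance of each worker's (flattened) parameter vector to the worker average:

```python
import math

def consensus_distance(param_vectors):
    """sqrt of the average squared L2 distance to the mean parameter vector."""
    n = len(param_vectors)
    dim = len(param_vectors[0])
    mean = [sum(v[k] for v in param_vectors) / n for k in range(dim)]
    mean_sq = sum(
        sum((v[k] - mean[k]) ** 2 for k in range(dim))
        for v in param_vectors
    ) / n
    return math.sqrt(mean_sq)
```

Under this definition, identical parameters give exactly 0, and the small positive values logged from epoch 2 onward measure how far the gossiping workers have drifted apart.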


[E 1B10 |   3872/60000 (  6%) ] Loss: 1.3047 top1= 69.0972
[E 1B20 |   7392/60000 ( 12%) ] Loss: 0.9179 top1= 67.3611
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.6108 top1= 82.2917
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.4386 top1= 86.1111
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.3712 top1= 89.2361

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8199 top1= 73.1270
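The "Averaged model" line evaluates a single model built by averaging parameters across workers, then scoring it once on the validation set. A minimal sketch of the averaging step (an assumption about this evaluation, not code from the repo):

```python
def average_params(workers_params):
    """Coordinate-wise mean of the workers' flattened parameter vectors."""
    n = len(workers_params)
    return [sum(vals) / n for vals in zip(*workers_params)]
```

Note the averaged model's validation accuracy (73.1%) is well below the per-batch training accuracy, consistent with the non-IID shards and the two Byzantine workers pulling the average away from any single worker's optimum.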

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.4000 top1= 86.4583

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.044


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.5712 top1= 78.8194
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.5374 top1= 82.9861
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.4484 top1= 86.8056
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.4069 top1= 85.7639
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.5182 top1= 85.0694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9189 top1= 72.7364

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.3281 top1= 92.0139

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.047


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.3384 top1= 87.8472
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.5655 top1= 84.0278
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.4238 top1= 87.1528
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.4403 top1= 86.8056
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.4361 top1= 86.1111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7976 top1= 75.6410

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.3646 top1= 88.8889

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.040


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.7056 top1= 80.9028
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.8426 top1= 78.4722
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.4287 top1= 86.4583
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.3691 top1= 89.5833
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.5407 top1= 84.7222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9186 top1= 75.4407

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.4194 top1= 88.8889

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.040


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.4131 top1= 87.1528
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.6538 top1= 79.8611
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.4641 top1= 86.1111
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.4197 top1= 88.1944
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.6600 top1= 82.2917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0697 top1= 67.4679

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.4638 top1= 83.6806

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.048


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.3816 top1= 87.5000
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.6501 top1= 78.4722
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.5704 top1= 83.3333
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.4256 top1= 87.8472
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.5117 top1= 82.9861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9535 top1= 71.6647

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.5792 top1= 81.9444

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.039


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.5165 top1= 84.7222
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.7201 top1= 79.1667
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.4406 top1= 87.8472
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.5985 top1= 84.7222
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.3373 top1= 89.9306

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0283 top1= 71.2039

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.5214 top1= 85.4167

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.031


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.4049 top1= 87.8472
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.6748 top1= 79.5139
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.4792 top1= 84.7222
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.5860 top1= 82.9861
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.4154 top1= 86.4583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9616 top1= 73.3574

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.5371 top1= 85.4167

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.032


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.5004 top1= 85.4167
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.6749 top1= 80.5556
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.5203 top1= 82.9861
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.5916 top1= 83.6806
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.4145 top1= 87.8472

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9779 top1= 77.1835

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.4350 top1= 85.7639

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.030


[E10B10 |   3872/60000 (  6%) ] Loss: 0.4539 top1= 83.3333
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.6610 top1= 78.8194
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.4409 top1= 84.7222
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.3961 top1= 86.4583
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.3749 top1= 88.1944

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8748 top1= 76.3722

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.3842 top1= 86.4583

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.026


[E11B10 |   3872/60000 (  6%) ] Loss: 0.4366 top1= 84.7222
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.4188 top1= 85.0694
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.3794 top1= 86.1111
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.4359 top1= 86.1111
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.3678 top1= 88.5417

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8112 top1= 80.1983

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.3601 top1= 86.8056

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.024


[E12B10 |   3872/60000 (  6%) ] Loss: 0.3705 top1= 84.3750
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.5793 top1= 78.4722
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.4023 top1= 87.1528
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.4154 top1= 86.1111
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.4260 top1= 85.4167

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8338 top1= 77.3938

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.4130 top1= 87.1528

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.031


[E13B10 |   3872/60000 (  6%) ] Loss: 0.4282 top1= 85.0694
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.4539 top1= 85.0694
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.3315 top1= 88.5417
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.4648 top1= 85.4167
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.4083 top1= 86.8056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8493 top1= 78.0048

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.3811 top1= 87.5000

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.034


[E14B10 |   3872/60000 (  6%) ] Loss: 0.4324 top1= 83.3333
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.4934 top1= 84.3750
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.3106 top1= 89.9306
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.4393 top1= 88.5417
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.4341 top1= 86.1111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7974 top1= 80.9095

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.3155 top1= 89.2361

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.025


[E15B10 |   3872/60000 (  6%) ] Loss: 0.3094 top1= 87.8472
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.7107 top1= 77.4306
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.4066 top1= 86.8056
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.3450 top1= 87.5000
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.3282 top1= 88.8889

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8611 top1= 75.4708

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.3520 top1= 87.8472

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.026


[E16B10 |   3872/60000 (  6%) ] Loss: 0.2976 top1= 89.2361
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.3551 top1= 86.4583
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.3135 top1= 89.2361
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.2926 top1= 90.6250
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.2304 top1= 92.7083

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8458 top1= 74.5793

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.2993 top1= 90.2778

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.023


[E17B10 |   3872/60000 (  6%) ] Loss: 0.2594 top1= 91.6667
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.3432 top1= 86.8056
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.3800 top1= 86.8056
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.3118 top1= 88.5417
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.3330 top1= 88.5417

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8316 top1= 78.3253

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.3554 top1= 87.8472

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.020


[E18B10 |   3872/60000 (  6%) ] Loss: 0.3021 top1= 91.3194
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.3560 top1= 87.1528
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.3702 top1= 85.0694
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.3088 top1= 89.5833
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.2518 top1= 91.6667

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9317 top1= 74.5192

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.4019 top1= 85.7639

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.021


[E19B10 |   3872/60000 (  6%) ] Loss: 0.3945 top1= 88.8889
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.3574 top1= 90.2778
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.3178 top1= 88.8889
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.3317 top1= 88.5417
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.2685 top1= 91.3194

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7946 top1= 79.7676

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.3955 top1= 86.8056

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.025


[E20B10 |   3872/60000 (  6%) ] Loss: 0.3206 top1= 89.2361
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.4540 top1= 86.1111
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.3786 top1= 88.5417
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.4395 top1= 83.6806
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.3490 top1= 87.8472

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8486 top1= 77.4439

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.4674 top1= 86.8056

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.027


[E21B10 |   3872/60000 (  6%) ] Loss: 0.3653 top1= 87.1528
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.3596 top1= 86.8056
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.3418 top1= 90.9722
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.3237 top1= 89.5833
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.2969 top1= 91.3194

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8861 top1= 76.6426

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.5464 top1= 86.1111

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.026


[E22B10 |   3872/60000 (  6%) ] Loss: 0.3343 top1= 88.1944
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.3550 top1= 86.4583
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.3285 top1= 86.8056
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.2361 top1= 92.3611
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.2540 top1= 90.2778

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7901 top1= 76.5325

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.3107 top1= 91.6667

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.023


[E23B10 |   3872/60000 (  6%) ] Loss: 0.2632 top1= 92.3611
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.3989 top1= 86.4583
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.3905 top1= 88.8889
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.2386 top1= 92.7083
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.2871 top1= 91.6667

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7481 top1= 79.0565

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.3194 top1= 90.2778

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.022


[E24B10 |   3872/60000 (  6%) ] Loss: 0.3229 top1= 88.8889
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.3116 top1= 90.6250
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.3087 top1= 88.5417
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.2053 top1= 93.7500
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.3996 top1= 87.1528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8025 top1= 77.6442

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.3896 top1= 85.4167

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.019


[E25B10 |   3872/60000 (  6%) ] Loss: 0.4166 top1= 87.5000
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.4214 top1= 87.1528
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.3949 top1= 85.0694
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.2613 top1= 92.3611
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.2847 top1= 89.5833

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8502 top1= 77.9247

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.4052 top1= 86.4583

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.019


[E26B10 |   3872/60000 (  6%) ] Loss: 0.3189 top1= 90.6250
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.3160 top1= 89.9306
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.2860 top1= 90.2778
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.2615 top1= 90.6250
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.2957 top1= 91.6667

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7790 top1= 76.9331

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.3559 top1= 87.5000

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.022


[E27B10 |   3872/60000 (  6%) ] Loss: 0.3640 top1= 86.4583
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.4787 top1= 85.7639
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.3248 top1= 88.5417
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.3678 top1= 89.2361
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.3286 top1= 87.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8423 top1= 75.8013

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.3608 top1= 87.1528

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.024


[E28B10 |   3872/60000 (  6%) ] Loss: 0.2779 top1= 89.2361
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.3326 top1= 91.3194
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.2608 top1= 89.9306
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.3149 top1= 89.2361
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.3716 top1= 89.9306

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8858 top1= 70.2224

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.3756 top1= 88.1944

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.022


[E29B10 |   3872/60000 (  6%) ] Loss: 0.2624 top1= 93.0556
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.3144 top1= 91.6667
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.3450 top1= 87.8472
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.2325 top1= 90.2778
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.2841 top1= 91.3194

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7372 top1= 79.9379

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.2261 top1= 92.3611

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.017


[E30B10 |   3872/60000 (  6%) ] Loss: 0.3933 top1= 88.5417
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.3763 top1= 86.1111
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.3269 top1= 89.2361
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.2176 top1= 94.4444
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.2754 top1= 91.3194

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7768 top1= 76.7528

