
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker BitFlippingWorker
=> Add worker BitFlippingWorker
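The two BitFlippingWorker entries are Byzantine participants alongside the nine honest SGDM workers. A common reading of a bit-flipping attack is that the attacker runs the same local computation but transmits the sign-inverted update. A minimal sketch under that assumption (the class names follow the log, but the `local_update` method and momentum-buffer logic are illustrative, not the repo's actual API):

```python
import torch

class SGDMWorker:
    """Honest worker: heavy-ball momentum on its local gradient."""
    def __init__(self, index, momentum=0.9):
        self.index = index
        self.momentum = momentum
        self.buffer = None  # momentum buffer m

    def local_update(self, grad):
        # m <- momentum * m + g  (initialized to the first gradient)
        if self.buffer is None:
            self.buffer = grad.clone()
        else:
            self.buffer.mul_(self.momentum).add_(grad)
        return self.buffer

class BitFlippingWorker(SGDMWorker):
    """Byzantine worker: computes the honest update, sends its negation."""
    def local_update(self, grad):
        return -super().local_update(grad)
```

With `momentum=0.9` this matches the honest workers' configuration above; only the sign of the transmitted vector differs.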

=== Start adding graph ===
=> Add graph codes.graph_utils.TorusByzantineGraph
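With nine honest workers, the torus part of TorusByzantineGraph plausibly places them on a 3x3 grid with wrap-around edges, each node gossiping with its four neighbours and itself. A sketch of the corresponding doubly stochastic mixing matrix (the uniform 1/5 weights and the exact attacker wiring are assumptions, not taken from the repo):

```python
import numpy as np

def torus_mixing_matrix(rows=3, cols=3):
    """Mixing matrix for a rows x cols torus: each node averages
    itself and its 4 wrap-around neighbours with equal weight 1/5."""
    n = rows * cols
    W = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            neighbours = [
                ((r - 1) % rows) * cols + c,   # up (wraps around)
                ((r + 1) % rows) * cols + c,   # down
                r * cols + (c - 1) % cols,     # left
                r * cols + (c + 1) % cols,     # right
            ]
            W[i, i] = 1 / 5
            for j in neighbours:
                W[i, j] += 1 / 5
    return W

W = torus_mixing_matrix()  # 9 x 9, rows and columns sum to 1
```

Because the neighbour relation is symmetric and all weights are equal, W is symmetric and doubly stochastic, which is the standard requirement for gossip averaging to preserve the global mean.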

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([1, 1, 2, 1, 5], device='cuda:0')
Worker 10 has targets: tensor([8, 9, 1, 0, 3], device='cuda:0')
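The peeked batches show a strongly non-IID split: each honest worker (0-8) sees essentially one or two digit classes, while the attackers (9-10) draw mixed labels. One simple scheme consistent with this pattern is sorting the dataset by label and handing out contiguous shards; a sketch (the function name and exact split are assumptions):

```python
def noniid_shards(labels, n_workers):
    """Sort example indices by label, then split into contiguous,
    equally sized shards, so each shard covers only 1-2 classes."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    size = len(order) // n_workers
    return [order[k * size:(k + 1) * size] for k in range(n_workers)]
```

With 60000 MNIST examples and 9 honest workers, each shard holds about 6666 examples, slightly more than one 6000-example class, which explains why workers 3-8 see two adjacent digits.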
=== Log global consensus distance @ E1B0 ===
consensus_distance=0.000
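Consensus distance measures how far the workers' models have drifted apart; the 0.000 at E1B0 is consistent with all workers starting from an identical initialization. A common definition is the root-mean-square distance of each worker's flattened parameter vector from the worker average; a sketch under that assumption (the logged quantity may be normalized differently):

```python
import torch

def consensus_distance(params):
    """RMS distance of each worker's flat parameter vector from the
    mean over workers: sqrt( (1/n) * sum_i ||x_i - x_bar||^2 )."""
    stacked = torch.stack(params)            # (n_workers, dim)
    mean = stacked.mean(dim=0)               # x_bar
    return ((stacked - mean) ** 2).sum(dim=1).mean().sqrt().item()
```

In the log this quantity jumps after the first epoch (0.060 at E2B0) once the non-IID gradients and Byzantine updates push the local models apart, then settles in the 0.02-0.05 range as gossip averaging counteracts the drift.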


[E 1B10 |   3872/60000 (  6%) ] Loss: 1.3760 top1= 66.6667
[E 1B20 |   7392/60000 ( 12%) ] Loss: 1.1573 top1= 59.0278
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.7251 top1= 78.4722
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.4762 top1= 85.7639
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.3762 top1= 87.8472

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7772 top1= 72.0052
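The "Averaged model" line reports validation accuracy of a single model built from the workers; the usual construction is an element-wise average of their parameters, which is then run over the held-out set. A sketch of the averaging step (the helper name is illustrative; the evaluation loop itself is standard and omitted):

```python
import torch

def averaged_state_dict(state_dicts):
    """Element-wise average of the workers' parameter tensors,
    key by key; load the result into one model and evaluate it."""
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return avg
```

Note that the average typically includes only the honest workers' models if worker identities are known to the evaluator; averaging in the Byzantine models would contaminate the global estimate.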

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.2943 top1= 88.8889

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.060


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.3976 top1= 86.1111
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.4766 top1= 82.2917
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.5118 top1= 82.2917
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.5525 top1= 80.5556
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.4754 top1= 85.0694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8073 top1= 71.8950

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.3404 top1= 89.2361

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.070


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.4186 top1= 85.0694
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.4104 top1= 87.1528
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.2979 top1= 89.2361
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.3474 top1= 88.1944
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.5586 top1= 84.0278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9459 top1= 67.5982

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.2923 top1= 92.7083

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.062


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.3047 top1= 89.5833
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.4274 top1= 85.0694
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.3419 top1= 88.5417
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.3681 top1= 88.1944
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.2931 top1= 90.9722

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6333 top1= 79.0966

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.1828 top1= 94.7917

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.033


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.2655 top1= 90.2778
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.2197 top1= 90.9722
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.2112 top1= 94.4444
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.1610 top1= 96.5278
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.2406 top1= 90.6250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6680 top1= 75.4207

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.1509 top1= 95.4861

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.039


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.2559 top1= 92.3611
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.2089 top1= 93.4028
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.2083 top1= 91.3194
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.3447 top1= 91.6667
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.2452 top1= 90.9722

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5968 top1= 81.5104

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.1850 top1= 94.7917

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.041


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.2222 top1= 93.0556
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.2413 top1= 92.7083
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.2312 top1= 93.4028
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.2382 top1= 91.6667
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.2609 top1= 92.3611

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4968 top1= 84.9058

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.1398 top1= 95.4861

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.039


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.1938 top1= 93.7500
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.1990 top1= 93.4028
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.1763 top1= 95.8333
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.1329 top1= 97.5694
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.1789 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4392 top1= 87.4900

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.0918 top1= 97.9167

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.026


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.1712 top1= 95.8333
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.1287 top1= 95.4861
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.1381 top1= 96.5278
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.1395 top1= 96.8750
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.2307 top1= 91.6667

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4692 top1= 85.4968

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.1371 top1= 95.4861

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.032


[E10B10 |   3872/60000 (  6%) ] Loss: 0.1627 top1= 95.8333
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.1253 top1= 96.1806
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.1549 top1= 95.4861
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.1416 top1= 95.8333
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.1538 top1= 95.4861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4346 top1= 87.6603

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.1261 top1= 95.8333

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.027


[E11B10 |   3872/60000 (  6%) ] Loss: 0.2099 top1= 93.4028
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.2699 top1= 90.6250
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.1615 top1= 94.7917
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.1416 top1= 96.8750
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.1771 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4522 top1= 86.9992

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.1411 top1= 95.8333

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.038


[E12B10 |   3872/60000 (  6%) ] Loss: 0.2490 top1= 90.9722
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.2234 top1= 93.7500
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.1983 top1= 92.3611
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.1755 top1= 95.4861
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.1768 top1= 92.7083

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4246 top1= 87.9207

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.1782 top1= 92.3611

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.026


[E13B10 |   3872/60000 (  6%) ] Loss: 0.1374 top1= 97.5694
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.1108 top1= 96.8750
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.2127 top1= 92.0139
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.2601 top1= 90.2778
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.2132 top1= 92.3611

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4637 top1= 86.9591

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.1839 top1= 94.7917

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.045


[E14B10 |   3872/60000 (  6%) ] Loss: 0.1888 top1= 93.7500
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.2043 top1= 92.3611
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.1806 top1= 94.4444
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.2098 top1= 93.0556
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.1075 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4639 top1= 85.8774

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.1680 top1= 95.1389

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.030


[E15B10 |   3872/60000 (  6%) ] Loss: 0.2070 top1= 92.7083
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.1863 top1= 94.0972
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.1846 top1= 94.0972
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.1355 top1= 95.1389
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.0881 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3406 top1= 90.7151

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.1510 top1= 94.7917

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.024


[E16B10 |   3872/60000 (  6%) ] Loss: 0.1131 top1= 96.1806
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.1346 top1= 96.1806
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.1441 top1= 94.7917
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.1358 top1= 96.8750
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.0916 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3584 top1= 90.3746

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.0923 top1= 96.1806

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.026


[E17B10 |   3872/60000 (  6%) ] Loss: 0.1266 top1= 95.8333
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.1577 top1= 94.7917
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.2912 top1= 92.7083
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.1097 top1= 97.5694
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.1132 top1= 96.5278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3735 top1= 89.6534

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.1497 top1= 96.1806

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.026


[E18B10 |   3872/60000 (  6%) ] Loss: 0.1821 top1= 95.1389
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.1293 top1= 94.4444
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.1068 top1= 95.8333
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.1002 top1= 96.5278
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.1108 top1= 96.5278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4280 top1= 86.3782

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.1676 top1= 94.7917

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.029


[E19B10 |   3872/60000 (  6%) ] Loss: 0.1561 top1= 95.8333
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.1606 top1= 95.1389
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.2480 top1= 92.3611
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.1445 top1= 94.4444
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.1172 top1= 96.8750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4553 top1= 84.9259

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.1394 top1= 95.4861

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.028


[E20B10 |   3872/60000 (  6%) ] Loss: 0.1955 top1= 93.0556
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.2225 top1= 93.4028
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.1855 top1= 93.4028
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.1514 top1= 96.5278
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.1377 top1= 95.8333

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4171 top1= 87.5100

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.1411 top1= 95.8333

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.032


[E21B10 |   3872/60000 (  6%) ] Loss: 0.2283 top1= 94.7917
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.1618 top1= 94.4444
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.2419 top1= 90.2778
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.1922 top1= 93.7500
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.1620 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3612 top1= 90.3245

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.1900 top1= 94.4444

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.038


[E22B10 |   3872/60000 (  6%) ] Loss: 0.2691 top1= 92.0139
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.2300 top1= 91.6667
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.1797 top1= 93.7500
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.1204 top1= 97.2222
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.0998 top1= 97.5694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4273 top1= 87.4800

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.1631 top1= 93.0556

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.037


[E23B10 |   3872/60000 (  6%) ] Loss: 0.2240 top1= 93.7500
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.1523 top1= 94.4444
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.1503 top1= 95.1389
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.0987 top1= 97.2222
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.1141 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3707 top1= 88.6619

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.1361 top1= 94.7917

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.029


[E24B10 |   3872/60000 (  6%) ] Loss: 0.1662 top1= 95.1389
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.1092 top1= 95.8333
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.1288 top1= 96.5278
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.0825 top1= 97.2222
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.1071 top1= 96.8750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2912 top1= 90.9956

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.0660 top1= 98.6111

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.021


[E25B10 |   3872/60000 (  6%) ] Loss: 0.0789 top1= 98.6111
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.0592 top1= 97.2222
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.0863 top1= 97.5694
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.0808 top1= 98.6111
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.1088 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2990 top1= 91.2760

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.1132 top1= 96.8750

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.023


[E26B10 |   3872/60000 (  6%) ] Loss: 0.1704 top1= 97.5694
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.1113 top1= 96.1806
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.1119 top1= 96.8750
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.1004 top1= 97.5694
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.1192 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2629 top1= 91.9972

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.0957 top1= 97.2222

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.021


[E27B10 |   3872/60000 (  6%) ] Loss: 0.0954 top1= 97.9167
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.0547 top1= 98.6111
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.1282 top1= 96.1806
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.0904 top1= 96.5278
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.0858 top1= 96.8750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2744 top1= 92.1875

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.0792 top1= 97.5694

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.024


[E28B10 |   3872/60000 (  6%) ] Loss: 0.0992 top1= 96.8750
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.1183 top1= 96.8750
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.1137 top1= 95.4861
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.0687 top1= 98.9583
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.1106 top1= 95.4861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3391 top1= 89.9639

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.1405 top1= 95.8333

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.028


[E29B10 |   3872/60000 (  6%) ] Loss: 0.1509 top1= 96.5278
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.1349 top1= 95.1389
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.0889 top1= 97.2222
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.1399 top1= 95.4861
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.1339 top1= 95.8333

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2759 top1= 92.3578

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.0850 top1= 97.5694

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.023


[E30B10 |   3872/60000 (  6%) ] Loss: 0.1112 top1= 96.5278
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.1092 top1= 96.8750
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.1212 top1= 95.1389
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.0832 top1= 97.9167
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.1400 top1= 96.1806

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3007 top1= 91.5966

