
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker ByzantineWorker(index=9)
=> Add worker ByzantineWorker(index=10)
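The setup above registers nine honest momentum-SGD workers (momentum 0.9) alongside two Byzantine workers. The core of an honest worker's local update can be sketched as follows; this `SGDMWorker` is a hypothetical minimal reimplementation for illustration, not the actual class from the `codes` package, and the learning rate is an assumption:

```python
class SGDMWorker:
    """Minimal sketch of a momentum-SGD worker's local step (hypothetical,
    not the actual SGDMWorker from the codes package)."""

    def __init__(self, index, momentum=0.9, lr=0.1):
        self.index = index
        self.momentum = momentum
        self.lr = lr
        self.buf = None  # momentum buffer, lazily initialised on first step

    def local_step(self, params, grads):
        # Heavy-ball momentum: buf <- m * buf + grad, then params <- params - lr * buf
        if self.buf is None:
            self.buf = [0.0] * len(grads)
        self.buf = [self.momentum * b + g for b, g in zip(self.buf, grads)]
        return [p - self.lr * b for p, b in zip(params, self.buf)]
```

A Byzantine worker would instead send arbitrary (adversarial) updates to its neighbours; the specific attack used in this run is not visible from the log.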

=== Start adding graph ===
<codes.graph_utils.TorusByzantineGraph object at 0x7f3d84e3c400>
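The communication topology is a torus with Byzantine nodes attached. A plain 2D torus neighbourhood (a wrap-around grid, which the nine honest workers would fill as 3x3) can be sketched as below; the grid dimensions and how the Byzantine nodes attach are assumptions, since `TorusByzantineGraph`'s internals are not shown in the log:

```python
def torus_neighbors(rows, cols):
    """Neighbour lists for a rows x cols torus: each node connects to its
    up/down/left/right neighbours, with wrap-around at the grid edges."""
    nbrs = {}
    for r in range(rows):
        for c in range(cols):
            node = r * cols + c
            nbrs[node] = [
                ((r - 1) % rows) * cols + c,  # up (wraps to bottom row)
                ((r + 1) % rows) * cols + c,  # down (wraps to top row)
                r * cols + (c - 1) % cols,    # left (wraps to last column)
                r * cols + (c + 1) % cols,    # right (wraps to first column)
            ]
    return nbrs
```

Every node on a torus of this kind has exactly four neighbours, which keeps the gossip-averaging degree constant across workers.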

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([1, 1, 2, 1, 5], device='cuda:0')
Worker 10 has targets: tensor([8, 9, 1, 0, 3], device='cuda:0')
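The peek above shows that honest workers 0 through 8 each hold samples dominated by a single digit class, i.e. a label-sorted non-IID partition of MNIST, while the Byzantine workers 9 and 10 draw mixed labels. Such a partition can be sketched as follows (a simplification; the actual sampler in the codebase may shard differently):

```python
def label_sorted_partition(labels, n_workers):
    """Split sample indices into n_workers contiguous shards after sorting
    by label, so each worker sees only one or two classes (non-IID split)."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    shard = len(order) // n_workers
    return [order[w * shard:(w + 1) * shard] for w in range(n_workers)]
```

This extreme heterogeneity explains why the globally averaged model evaluates far worse (~63% top-1 below) than each worker's accuracy on its own local shard (~95-98%).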

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.457
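Consensus distance measures how far the workers' models have drifted from their mean. A common definition, sketched below, is the average Euclidean distance between each worker's flattened parameter vector and the global average; the exact normalisation used by this logger is an assumption:

```python
import math

def consensus_distance(models):
    """Average Euclidean distance of each worker's parameter vector
    (a flat list of floats) from the mean vector across workers."""
    n, d = len(models), len(models[0])
    mean = [sum(m[j] for m in models) / n for j in range(d)]
    dists = [math.sqrt(sum((m[j] - mean[j]) ** 2 for j in range(d)))
             for m in models]
    return sum(dists) / n
```

The drop from 0.457 here to ~0.05 in later epochs shows the gossip averaging pulling the honest workers toward agreement despite the Byzantine neighbours.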


[E 1B10 |   3872/60000 (  6%) ] Loss: 1.7842 top1= 53.1250
[E 1B20 |   7392/60000 ( 12%) ] Loss: 0.8665 top1= 81.5972
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.4084 top1= 89.9306
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.2784 top1= 94.7917
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.2767 top1= 95.4861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1452 top1= 62.6202
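The "Averaged model" evaluation reported above validates a single model obtained by averaging the workers' parameters. A sketch of that averaging step is below; `average_state_dicts` is a hypothetical helper (parameters are simplified to flat lists of floats), and whether Byzantine workers are included in the average is not visible from the log:

```python
def average_state_dicts(state_dicts):
    """Element-wise average of worker parameter dictionaries, producing
    the global averaged model that the evaluation lines report on."""
    n = len(state_dicts)
    return {k: [sum(sd[k][i] for sd in state_dicts) / n
                for i in range(len(state_dicts[0][k]))]
            for k in state_dicts[0]}
```

With the label-sorted split, each local model overfits its own classes, so the average generalises to only ~63% despite local top-1 scores above 95%.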

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.2319 top1= 98.2639

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.064


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.3264 top1= 93.0556
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.2802 top1= 95.4861
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.2655 top1= 95.8333
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.2326 top1= 96.5278
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.2612 top1= 95.4861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1566 top1= 62.7905

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.2104 top1= 98.2639

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.052


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.3030 top1= 93.0556
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.2617 top1= 96.1806
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.2520 top1= 96.1806
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.2189 top1= 96.1806
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.2569 top1= 95.4861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1672 top1= 62.8906

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.2040 top1= 98.6111

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.051


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.2926 top1= 93.7500
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.2525 top1= 97.2222
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.2430 top1= 96.5278
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.2122 top1= 96.1806
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.2563 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1738 top1= 63.2712

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.2023 top1= 98.2639

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.050


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.2883 top1= 94.7917
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.2485 top1= 97.9167
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.2389 top1= 96.5278
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.2087 top1= 96.5278
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.2548 top1= 95.4861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1765 top1= 63.5116

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.2018 top1= 98.2639

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.050


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.2878 top1= 94.7917
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.2459 top1= 97.9167
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.2360 top1= 96.5278
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.2068 top1= 96.8750
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.2530 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1782 top1= 63.6118

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.2015 top1= 98.2639

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.050


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.2870 top1= 94.7917
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.2444 top1= 97.9167
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.2336 top1= 96.8750
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.2052 top1= 96.8750
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.2505 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1778 top1= 63.7019

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.2017 top1= 98.2639

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.049


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.2868 top1= 94.7917
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.2431 top1= 97.9167
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.2316 top1= 96.8750
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.2031 top1= 96.8750
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.2487 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1775 top1= 63.8522

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.2016 top1= 98.2639

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.049


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.2861 top1= 94.4444
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.2432 top1= 98.2639
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.2306 top1= 96.8750
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.2022 top1= 96.8750
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.2476 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1775 top1= 63.9123

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.2013 top1= 98.2639

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.049


[E10B10 |   3872/60000 (  6%) ] Loss: 0.2859 top1= 94.4444
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.2433 top1= 98.2639
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.2293 top1= 96.8750
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.2017 top1= 97.2222
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.2467 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1782 top1= 63.9724

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.2013 top1= 98.2639

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.049


[E11B10 |   3872/60000 (  6%) ] Loss: 0.2859 top1= 94.4444
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.2429 top1= 98.2639
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.2284 top1= 96.5278
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.2013 top1= 97.2222
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.2464 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1783 top1= 64.0825

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.2013 top1= 98.2639

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.049


[E12B10 |   3872/60000 (  6%) ] Loss: 0.2857 top1= 94.4444
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.2426 top1= 98.6111
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.2276 top1= 96.5278
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.2010 top1= 96.8750
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.2451 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1789 top1= 64.0425

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.2013 top1= 98.2639

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.049


[E13B10 |   3872/60000 (  6%) ] Loss: 0.2840 top1= 94.7917
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.2431 top1= 98.6111
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.2271 top1= 96.5278
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.2007 top1= 96.8750
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.2451 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1784 top1= 64.2328

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.2017 top1= 98.2639

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.049


[E14B10 |   3872/60000 (  6%) ] Loss: 0.2840 top1= 94.7917
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.2428 top1= 98.2639
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.2261 top1= 96.5278
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.2004 top1= 97.2222
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.2444 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1784 top1= 64.3229

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.2020 top1= 98.2639

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.049


[E15B10 |   3872/60000 (  6%) ] Loss: 0.2830 top1= 94.7917
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.2427 top1= 98.6111
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.2255 top1= 96.5278
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.2000 top1= 97.2222
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.2440 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1784 top1= 64.3029

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.2021 top1= 98.2639

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.049


[E16B10 |   3872/60000 (  6%) ] Loss: 0.2820 top1= 94.7917
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.2429 top1= 98.6111
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.2251 top1= 96.5278
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.2000 top1= 97.2222
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.2439 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1781 top1= 64.3530

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.2021 top1= 97.9167

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.049


[E17B10 |   3872/60000 (  6%) ] Loss: 0.2818 top1= 94.7917
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.2433 top1= 98.6111
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.2246 top1= 96.5278
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.1997 top1= 97.2222
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.2427 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1775 top1= 64.4732

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.2020 top1= 97.9167

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.049


[E18B10 |   3872/60000 (  6%) ] Loss: 0.2805 top1= 94.7917
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.2432 top1= 98.2639
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.2244 top1= 96.1806
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.1992 top1= 97.2222
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.2421 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1778 top1= 64.4131

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.2022 top1= 97.9167

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.049


[E19B10 |   3872/60000 (  6%) ] Loss: 0.2795 top1= 95.1389
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.2436 top1= 98.6111
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.2245 top1= 96.1806
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.1994 top1= 97.2222
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.2420 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1776 top1= 64.5132

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.2020 top1= 97.9167

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.048


[E20B10 |   3872/60000 (  6%) ] Loss: 0.2777 top1= 95.1389
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.2434 top1= 98.6111
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.2246 top1= 96.1806
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.1992 top1= 97.2222
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.2408 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1774 top1= 64.5533

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.2017 top1= 97.9167

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.048


[E21B10 |   3872/60000 (  6%) ] Loss: 0.2766 top1= 95.4861
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.2434 top1= 98.6111
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.2243 top1= 96.5278
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.1991 top1= 97.2222
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.2398 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1770 top1= 64.6034

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.2016 top1= 98.2639

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.048


[E22B10 |   3872/60000 (  6%) ] Loss: 0.2759 top1= 94.7917
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.2429 top1= 98.2639
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.2242 top1= 96.8750
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.1990 top1= 97.2222
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.2387 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1772 top1= 64.7135

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.2015 top1= 98.2639

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.048


[E23B10 |   3872/60000 (  6%) ] Loss: 0.2752 top1= 94.7917
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.2428 top1= 98.6111
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.2241 top1= 96.8750
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.1988 top1= 97.2222
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.2382 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1772 top1= 64.8037

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.2013 top1= 98.2639

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.048


[E24B10 |   3872/60000 (  6%) ] Loss: 0.2745 top1= 94.7917
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.2426 top1= 98.6111
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.2239 top1= 96.8750
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.1984 top1= 97.5694
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.2373 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1773 top1= 64.7837

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.2013 top1= 98.2639

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.048


[E25B10 |   3872/60000 (  6%) ] Loss: 0.2739 top1= 95.1389
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.2424 top1= 98.6111
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.2241 top1= 96.8750
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.1983 top1= 97.5694
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.2365 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1776 top1= 64.7837

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.2013 top1= 98.2639

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.048


[E26B10 |   3872/60000 (  6%) ] Loss: 0.2733 top1= 94.7917
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.2421 top1= 98.9583
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.2242 top1= 96.8750
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.1983 top1= 97.5694
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.2362 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1776 top1= 64.7035

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.2014 top1= 98.2639

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.048


[E27B10 |   3872/60000 (  6%) ] Loss: 0.2725 top1= 95.1389
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.2419 top1= 98.9583
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.2240 top1= 97.2222
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.1982 top1= 97.5694
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.2357 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1767 top1= 64.8337

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.2016 top1= 98.2639

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.048


[E28B10 |   3872/60000 (  6%) ] Loss: 0.2722 top1= 94.7917
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.2417 top1= 98.6111
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.2242 top1= 97.2222
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.1977 top1= 97.5694
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.2350 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1759 top1= 64.8738

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.2018 top1= 98.2639

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.048


[E29B10 |   3872/60000 (  6%) ] Loss: 0.2715 top1= 95.1389
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.2419 top1= 98.9583
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.2243 top1= 97.2222
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.1976 top1= 97.5694
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.2348 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1752 top1= 64.9239

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.2020 top1= 98.2639

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.048


[E30B10 |   3872/60000 (  6%) ] Loss: 0.2705 top1= 95.4861
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.2419 top1= 98.6111
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.2241 top1= 97.2222
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.1973 top1= 97.5694
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.2344 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1753 top1= 64.9740

