
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker SGDMWorker(index=9, momentum=0.9)
=> Add worker BitFlippingWorker
=> Add worker BitFlippingWorker
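The roster above mixes ten honest SGD-with-momentum workers with two Byzantine BitFlippingWorkers. A minimal sketch of what these two roles plausibly compute, assuming the attacker simply flips the sign of its gradient before the local update (class names mirror the log; the update rules are assumptions, not the repository's actual code):

```python
import numpy as np

class SGDMWorker:
    """Honest worker: SGD with heavy-ball momentum (momentum=0.9 in the log)."""
    def __init__(self, index, momentum=0.9, lr=0.1):
        self.index = index
        self.momentum = momentum
        self.lr = lr
        self.velocity = None

    def step(self, params, grad):
        # v <- m*v + g ;  w <- w - lr*v
        if self.velocity is None:
            self.velocity = np.zeros_like(params)
        self.velocity = self.momentum * self.velocity + grad
        return params - self.lr * self.velocity


class BitFlippingWorker(SGDMWorker):
    """Byzantine worker: negates every gradient before the update,
    actively pushing its model against the honest descent direction."""
    def step(self, params, grad):
        return super().step(params, -grad)
```

With ten honest workers and two attackers, plain gossip averaging has no robustness mechanism, which is consistent with the averaged-model accuracy staying low throughout the run below.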

=== Start adding graph ===
<codes.graph_utils.RandomSmallWorldGraph object at 0x7f36c56df400>
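The bare object repr above is the communication topology. A small-world graph in the Watts–Strogatz style can be sketched as a ring lattice with random rewiring (pure-Python adjacency sets; the actual construction inside `codes.graph_utils.RandomSmallWorldGraph` is not shown in the log and may differ):

```python
import random

def small_world_graph(n, k=2, p=0.2, seed=0):
    """Ring lattice on n nodes (each linked to its k nearest neighbours
    on each side), then rewire each lattice edge with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < p:  # rewire this edge once
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj
```

A few random long-range shortcuts are what drive the short average path length logged below, while most links remain local.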

Train epoch 1
[E 1B0  |    384/60000 (  1%) ] Loss: 2.3054 top1= 10.0000

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([1, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([2, 3, 3, 3, 3], device='cuda:0')
Worker 4 has targets: tensor([3, 4, 4, 4, 4], device='cuda:0')
Worker 5 has targets: tensor([4, 5, 5, 5, 5], device='cuda:0')
Worker 6 has targets: tensor([6, 6, 6, 6, 6], device='cuda:0')
Worker 7 has targets: tensor([7, 7, 7, 7, 7], device='cuda:0')
Worker 8 has targets: tensor([7, 8, 8, 8, 8], device='cuda:0')
Worker 9 has targets: tensor([8, 9, 9, 9, 9], device='cuda:0')
Worker 10 has targets: tensor([4, 8, 8, 6, 9], device='cuda:0')
Worker 11 has targets: tensor([5, 3, 6, 0, 9], device='cuda:0')
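The peek above shows a heavily non-IID split: each honest worker holds essentially one digit class, while the two BitFlippingWorkers (10 and 11) draw mixed labels. One common way to produce such a split is to sort example indices by label and hand out contiguous shards, one per worker (a sketch; the experiment's actual sampler is not shown in the log):

```python
def label_sorted_shards(labels, num_workers):
    """Sort example indices by label, then split into contiguous shards,
    one per worker -> each worker sees only one or two classes."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    shard = len(order) // num_workers
    return [order[w * shard:(w + 1) * shard] for w in range(num_workers)]
```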

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.802
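Consensus distance here plausibly measures how far the workers' parameter vectors have drifted from their mutual average; the exact normalisation used by this logger is an assumption. A minimal version:

```python
import numpy as np

def consensus_distance(worker_params):
    """Mean L2 distance of each worker's parameter vector to the global mean."""
    stacked = np.stack(worker_params)   # shape: (num_workers, dim)
    mean = stacked.mean(axis=0)
    return float(np.linalg.norm(stacked - mean, axis=1).mean())
```

In this run the value climbs from 0.802 at E1B0 to 17.802 at E30B0, i.e. the workers steadily drift apart rather than reaching consensus, which is consistent with the presence of the two bit-flipping attackers.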


=== Log average shortest path distance for small world @ E1B0 ===
average_shortest_path_distance=2.7777777777777777
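The average shortest path length logged above (≈2.78 on this topology) can be computed by running BFS from every node, a self-contained sketch of what `networkx.average_shortest_path_length` would return on the same adjacency structure:

```python
from collections import deque

def average_shortest_path(adj):
    """Mean BFS distance over all ordered pairs of distinct, reachable nodes."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs
```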


[E 1B10 |   4224/60000 (  7%) ] Loss: 0.6255 top1= 74.3750
[E 1B20 |   8064/60000 ( 13%) ] Loss: 0.2064 top1= 96.5625
[E 1B30 |  11904/60000 ( 20%) ] Loss: 0.8950 top1= 85.0000
[E 1B40 |  15744/60000 ( 26%) ] Loss: 0.7244 top1= 87.8125
[E 1B50 |  19584/60000 ( 33%) ] Loss: 0.2085 top1= 93.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8740 top1= 39.7436
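The "Averaged model" evaluation lines report the accuracy of the uniform parameter average over all workers, not of any single worker. Averaging a list of per-worker parameter dictionaries can be sketched as follows (assumed semantics; the repository's evaluation code is not shown in the log):

```python
import numpy as np

def average_models(state_dicts):
    """Uniform average of per-worker parameter dictionaries, key by key."""
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}
```

The large gap between per-batch training accuracy (roughly 86-99% below) and the averaged model's validation accuracy (roughly 40-47%) is consistent with non-IID local data plus Byzantine drift: each worker fits its own shard well, but the average of the divergent models generalises poorly.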

Train epoch 2
[E 2B0  |    384/60000 (  1%) ] Loss: 0.3943 top1= 89.3750

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.341


[E 2B10 |   4224/60000 (  7%) ] Loss: 1.4052 top1= 82.8125
[E 2B20 |   8064/60000 ( 13%) ] Loss: 0.1723 top1= 96.2500
[E 2B30 |  11904/60000 ( 20%) ] Loss: 0.1956 top1= 95.3125
[E 2B40 |  15744/60000 ( 26%) ] Loss: 0.0830 top1= 97.5000
[E 2B50 |  19584/60000 ( 33%) ] Loss: 0.0752 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8103 top1= 41.0958

Train epoch 3
[E 3B0  |    384/60000 (  1%) ] Loss: 0.4001 top1= 88.4375

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.409


[E 3B10 |   4224/60000 (  7%) ] Loss: 0.1119 top1= 95.0000
[E 3B20 |   8064/60000 ( 13%) ] Loss: 0.1363 top1= 98.1250
[E 3B30 |  11904/60000 ( 20%) ] Loss: 0.1408 top1= 99.3750
[E 3B40 |  15744/60000 ( 26%) ] Loss: 0.1167 top1= 96.8750
[E 3B50 |  19584/60000 ( 33%) ] Loss: 0.0914 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7517 top1= 40.9956

Train epoch 4
[E 4B0  |    384/60000 (  1%) ] Loss: 0.1861 top1= 94.6875

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.438


[E 4B10 |   4224/60000 (  7%) ] Loss: 0.1746 top1= 94.6875
[E 4B20 |   8064/60000 ( 13%) ] Loss: 0.3282 top1= 86.2500
[E 4B30 |  11904/60000 ( 20%) ] Loss: 0.2010 top1= 95.0000
[E 4B40 |  15744/60000 ( 26%) ] Loss: 0.1359 top1= 95.0000
[E 4B50 |  19584/60000 ( 33%) ] Loss: 0.2059 top1= 93.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.6780 top1= 43.9603

Train epoch 5
[E 5B0  |    384/60000 (  1%) ] Loss: 0.2446 top1= 94.6875

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.491


[E 5B10 |   4224/60000 (  7%) ] Loss: 0.2543 top1= 88.1250
[E 5B20 |   8064/60000 ( 13%) ] Loss: 0.2515 top1= 91.5625
[E 5B30 |  11904/60000 ( 20%) ] Loss: 0.2848 top1= 97.5000
[E 5B40 |  15744/60000 ( 26%) ] Loss: 0.3755 top1= 95.6250
[E 5B50 |  19584/60000 ( 33%) ] Loss: 0.3045 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8525 top1= 43.8101

Train epoch 6
[E 6B0  |    384/60000 (  1%) ] Loss: 0.5558 top1= 89.0625

=== Log global consensus distance @ E6B0 ===
consensus_distance=7.733


[E 6B10 |   4224/60000 (  7%) ] Loss: 0.4385 top1= 90.6250
[E 6B20 |   8064/60000 ( 13%) ] Loss: 0.3835 top1= 95.3125
[E 6B30 |  11904/60000 ( 20%) ] Loss: 0.3244 top1= 95.0000
[E 6B40 |  15744/60000 ( 26%) ] Loss: 0.4251 top1= 92.1875
[E 6B50 |  19584/60000 ( 33%) ] Loss: 0.2756 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8655 top1= 41.6066

Train epoch 7
[E 7B0  |    384/60000 (  1%) ] Loss: 0.5545 top1= 94.3750

=== Log global consensus distance @ E7B0 ===
consensus_distance=12.582


[E 7B10 |   4224/60000 (  7%) ] Loss: 0.5053 top1= 98.1250
[E 7B20 |   8064/60000 ( 13%) ] Loss: 0.5039 top1= 97.5000
[E 7B30 |  11904/60000 ( 20%) ] Loss: 0.4643 top1= 98.1250
[E 7B40 |  15744/60000 ( 26%) ] Loss: 0.4602 top1= 98.7500
[E 7B50 |  19584/60000 ( 33%) ] Loss: 0.4568 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8920 top1= 42.0873

Train epoch 8
[E 8B0  |    384/60000 (  1%) ] Loss: 0.5145 top1= 93.1250

=== Log global consensus distance @ E8B0 ===
consensus_distance=13.479


[E 8B10 |   4224/60000 (  7%) ] Loss: 0.4012 top1= 98.7500
[E 8B20 |   8064/60000 ( 13%) ] Loss: 0.4141 top1= 98.1250
[E 8B30 |  11904/60000 ( 20%) ] Loss: 0.3894 top1= 98.1250
[E 8B40 |  15744/60000 ( 26%) ] Loss: 0.3867 top1= 98.7500
[E 8B50 |  19584/60000 ( 33%) ] Loss: 0.3854 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8784 top1= 42.5781

Train epoch 9
[E 9B0  |    384/60000 (  1%) ] Loss: 0.4254 top1= 95.0000

=== Log global consensus distance @ E9B0 ===
consensus_distance=13.686


[E 9B10 |   4224/60000 (  7%) ] Loss: 0.3386 top1= 99.0625
[E 9B20 |   8064/60000 ( 13%) ] Loss: 0.3269 top1= 98.1250
[E 9B30 |  11904/60000 ( 20%) ] Loss: 0.3148 top1= 99.3750
[E 9B40 |  15744/60000 ( 26%) ] Loss: 0.3784 top1= 96.8750
[E 9B50 |  19584/60000 ( 33%) ] Loss: 0.3426 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8633 top1= 43.2592

Train epoch 10
[E10B0  |    384/60000 (  1%) ] Loss: 0.3809 top1= 95.0000

=== Log global consensus distance @ E10B0 ===
consensus_distance=13.962


[E10B10 |   4224/60000 (  7%) ] Loss: 0.3177 top1= 97.8125
[E10B20 |   8064/60000 ( 13%) ] Loss: 0.3129 top1= 98.1250
[E10B30 |  11904/60000 ( 20%) ] Loss: 0.2999 top1= 98.4375
[E10B40 |  15744/60000 ( 26%) ] Loss: 0.3037 top1= 88.4375
[E10B50 |  19584/60000 ( 33%) ] Loss: 0.2974 top1= 88.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8396 top1= 43.6298

Train epoch 11
[E11B0  |    384/60000 (  1%) ] Loss: 0.3541 top1= 86.2500

=== Log global consensus distance @ E11B0 ===
consensus_distance=14.264


[E11B10 |   4224/60000 (  7%) ] Loss: 0.2594 top1= 90.0000
[E11B20 |   8064/60000 ( 13%) ] Loss: 0.2756 top1= 89.0625
[E11B30 |  11904/60000 ( 20%) ] Loss: 0.2546 top1= 90.0000
[E11B40 |  15744/60000 ( 26%) ] Loss: 0.2843 top1= 88.7500
[E11B50 |  19584/60000 ( 33%) ] Loss: 0.2641 top1= 89.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8300 top1= 44.4010

Train epoch 12
[E12B0  |    384/60000 (  1%) ] Loss: 0.3472 top1= 86.2500

=== Log global consensus distance @ E12B0 ===
consensus_distance=14.553


[E12B10 |   4224/60000 (  7%) ] Loss: 0.2473 top1= 89.3750
[E12B20 |   8064/60000 ( 13%) ] Loss: 0.2813 top1= 88.7500
[E12B30 |  11904/60000 ( 20%) ] Loss: 0.2406 top1= 89.6875
[E12B40 |  15744/60000 ( 26%) ] Loss: 0.2861 top1= 86.8750
[E12B50 |  19584/60000 ( 33%) ] Loss: 0.2516 top1= 89.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8261 top1= 44.5312

Train epoch 13
[E13B0  |    384/60000 (  1%) ] Loss: 0.3002 top1= 86.5625

=== Log global consensus distance @ E13B0 ===
consensus_distance=14.839


[E13B10 |   4224/60000 (  7%) ] Loss: 0.2447 top1= 89.0625
[E13B20 |   8064/60000 ( 13%) ] Loss: 0.2346 top1= 89.0625
[E13B30 |  11904/60000 ( 20%) ] Loss: 0.2251 top1= 89.6875
[E13B40 |  15744/60000 ( 26%) ] Loss: 0.2733 top1= 86.8750
[E13B50 |  19584/60000 ( 33%) ] Loss: 0.2583 top1= 88.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8055 top1= 45.3025

Train epoch 14
[E14B0  |    384/60000 (  1%) ] Loss: 0.2780 top1= 87.8125

=== Log global consensus distance @ E14B0 ===
consensus_distance=15.109


[E14B10 |   4224/60000 (  7%) ] Loss: 0.2252 top1= 89.0625
[E14B20 |   8064/60000 ( 13%) ] Loss: 0.2422 top1= 88.7500
[E14B30 |  11904/60000 ( 20%) ] Loss: 0.2278 top1= 89.3750
[E14B40 |  15744/60000 ( 26%) ] Loss: 0.2430 top1= 88.4375
[E14B50 |  19584/60000 ( 33%) ] Loss: 0.2239 top1= 89.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8071 top1= 44.7616

Train epoch 15
[E15B0  |    384/60000 (  1%) ] Loss: 0.2767 top1= 87.5000

=== Log global consensus distance @ E15B0 ===
consensus_distance=15.358


[E15B10 |   4224/60000 (  7%) ] Loss: 0.2080 top1= 89.6875
[E15B20 |   8064/60000 ( 13%) ] Loss: 0.2325 top1= 89.0625
[E15B30 |  11904/60000 ( 20%) ] Loss: 0.2108 top1= 89.6875
[E15B40 |  15744/60000 ( 26%) ] Loss: 0.2569 top1= 87.8125
[E15B50 |  19584/60000 ( 33%) ] Loss: 0.2229 top1= 89.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8079 top1= 45.0120

Train epoch 16
[E16B0  |    384/60000 (  1%) ] Loss: 0.2926 top1= 86.5625

=== Log global consensus distance @ E16B0 ===
consensus_distance=15.589


[E16B10 |   4224/60000 (  7%) ] Loss: 0.1996 top1= 89.6875
[E16B20 |   8064/60000 ( 13%) ] Loss: 0.2574 top1= 88.1250
[E16B30 |  11904/60000 ( 20%) ] Loss: 0.2027 top1= 90.0000
[E16B40 |  15744/60000 ( 26%) ] Loss: 0.2363 top1= 88.4375
[E16B50 |  19584/60000 ( 33%) ] Loss: 0.2028 top1= 89.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7881 top1= 45.3325

Train epoch 17
[E17B0  |    384/60000 (  1%) ] Loss: 0.2640 top1= 87.5000

=== Log global consensus distance @ E17B0 ===
consensus_distance=15.811


[E17B10 |   4224/60000 (  7%) ] Loss: 0.1978 top1= 89.6875
[E17B20 |   8064/60000 ( 13%) ] Loss: 0.2334 top1= 89.0625
[E17B30 |  11904/60000 ( 20%) ] Loss: 0.2087 top1= 89.6875
[E17B40 |  15744/60000 ( 26%) ] Loss: 0.2401 top1= 87.5000
[E17B50 |  19584/60000 ( 33%) ] Loss: 0.2099 top1= 89.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7866 top1= 46.6346

Train epoch 18
[E18B0  |    384/60000 (  1%) ] Loss: 0.2676 top1= 87.5000

=== Log global consensus distance @ E18B0 ===
consensus_distance=16.015


[E18B10 |   4224/60000 (  7%) ] Loss: 0.2252 top1= 88.4375
[E18B20 |   8064/60000 ( 13%) ] Loss: 0.2178 top1= 88.7500
[E18B30 |  11904/60000 ( 20%) ] Loss: 0.2197 top1= 88.4375
[E18B40 |  15744/60000 ( 26%) ] Loss: 0.2042 top1= 89.3750
[E18B50 |  19584/60000 ( 33%) ] Loss: 0.2330 top1= 88.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7774 top1= 46.7248

Train epoch 19
[E19B0  |    384/60000 (  1%) ] Loss: 0.2612 top1= 87.5000

=== Log global consensus distance @ E19B0 ===
consensus_distance=16.209


[E19B10 |   4224/60000 (  7%) ] Loss: 0.1909 top1= 89.6875
[E19B20 |   8064/60000 ( 13%) ] Loss: 0.2344 top1= 88.4375
[E19B30 |  11904/60000 ( 20%) ] Loss: 0.1987 top1= 89.6875
[E19B40 |  15744/60000 ( 26%) ] Loss: 0.2146 top1= 88.7500
[E19B50 |  19584/60000 ( 33%) ] Loss: 0.2152 top1= 89.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7752 top1= 46.0337

Train epoch 20
[E20B0  |    384/60000 (  1%) ] Loss: 0.2704 top1= 87.1875

=== Log global consensus distance @ E20B0 ===
consensus_distance=16.397


[E20B10 |   4224/60000 (  7%) ] Loss: 0.1837 top1= 89.6875
[E20B20 |   8064/60000 ( 13%) ] Loss: 0.2354 top1= 88.7500
[E20B30 |  11904/60000 ( 20%) ] Loss: 0.2004 top1= 89.6875
[E20B40 |  15744/60000 ( 26%) ] Loss: 0.2378 top1= 87.8125
[E20B50 |  19584/60000 ( 33%) ] Loss: 0.1948 top1= 89.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7635 top1= 46.7448

Train epoch 21
[E21B0  |    384/60000 (  1%) ] Loss: 0.2507 top1= 87.8125

=== Log global consensus distance @ E21B0 ===
consensus_distance=16.568


[E21B10 |   4224/60000 (  7%) ] Loss: 0.1818 top1= 90.0000
[E21B20 |   8064/60000 ( 13%) ] Loss: 0.2961 top1= 86.5625
[E21B30 |  11904/60000 ( 20%) ] Loss: 0.1996 top1= 90.0000
[E21B40 |  15744/60000 ( 26%) ] Loss: 0.2131 top1= 89.0625
[E21B50 |  19584/60000 ( 33%) ] Loss: 0.1929 top1= 89.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7624 top1= 45.1723

Train epoch 22
[E22B0  |    384/60000 (  1%) ] Loss: 0.2734 top1= 86.8750

=== Log global consensus distance @ E22B0 ===
consensus_distance=16.739


[E22B10 |   4224/60000 (  7%) ] Loss: 0.1879 top1= 89.3750
[E22B20 |   8064/60000 ( 13%) ] Loss: 0.2186 top1= 88.7500
[E22B30 |  11904/60000 ( 20%) ] Loss: 0.1908 top1= 90.0000
[E22B40 |  15744/60000 ( 26%) ] Loss: 0.2128 top1= 89.0625
[E22B50 |  19584/60000 ( 33%) ] Loss: 0.2215 top1= 88.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7604 top1= 45.4327

Train epoch 23
[E23B0  |    384/60000 (  1%) ] Loss: 0.2624 top1= 86.8750

=== Log global consensus distance @ E23B0 ===
consensus_distance=16.894


[E23B10 |   4224/60000 (  7%) ] Loss: 0.1763 top1= 90.0000
[E23B20 |   8064/60000 ( 13%) ] Loss: 0.2401 top1= 88.1250
[E23B30 |  11904/60000 ( 20%) ] Loss: 0.2035 top1= 89.0625
[E23B40 |  15744/60000 ( 26%) ] Loss: 0.2197 top1= 88.1250
[E23B50 |  19584/60000 ( 33%) ] Loss: 0.2003 top1= 89.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7495 top1= 45.9435

Train epoch 24
[E24B0  |    384/60000 (  1%) ] Loss: 0.2681 top1= 87.1875

=== Log global consensus distance @ E24B0 ===
consensus_distance=17.039


[E24B10 |   4224/60000 (  7%) ] Loss: 0.1898 top1= 89.0625
[E24B20 |   8064/60000 ( 13%) ] Loss: 0.2117 top1= 89.0625
[E24B30 |  11904/60000 ( 20%) ] Loss: 0.1836 top1= 90.0000
[E24B40 |  15744/60000 ( 26%) ] Loss: 0.2031 top1= 89.3750
[E24B50 |  19584/60000 ( 33%) ] Loss: 0.1928 top1= 89.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7582 top1= 47.1655

Train epoch 25
[E25B0  |    384/60000 (  1%) ] Loss: 0.2453 top1= 87.8125

=== Log global consensus distance @ E25B0 ===
consensus_distance=17.182


[E25B10 |   4224/60000 (  7%) ] Loss: 0.1826 top1= 89.6875
[E25B20 |   8064/60000 ( 13%) ] Loss: 0.2040 top1= 98.7500
[E25B30 |  11904/60000 ( 20%) ] Loss: 0.1933 top1= 89.6875
[E25B40 |  15744/60000 ( 26%) ] Loss: 0.2043 top1= 89.3750
[E25B50 |  19584/60000 ( 33%) ] Loss: 0.1823 top1= 89.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7451 top1= 46.6546

Train epoch 26
[E26B0  |    384/60000 (  1%) ] Loss: 0.2444 top1= 96.8750

=== Log global consensus distance @ E26B0 ===
consensus_distance=17.319


[E26B10 |   4224/60000 (  7%) ] Loss: 0.1879 top1= 98.7500
[E26B20 |   8064/60000 ( 13%) ] Loss: 0.2043 top1= 98.7500
[E26B30 |  11904/60000 ( 20%) ] Loss: 0.1793 top1= 90.0000
[E26B40 |  15744/60000 ( 26%) ] Loss: 0.3287 top1= 86.2500
[E26B50 |  19584/60000 ( 33%) ] Loss: 0.1894 top1= 89.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7421 top1= 46.7748

Train epoch 27
[E27B0  |    384/60000 (  1%) ] Loss: 0.2446 top1= 88.1250

=== Log global consensus distance @ E27B0 ===
consensus_distance=17.446


[E27B10 |   4224/60000 (  7%) ] Loss: 0.1703 top1= 90.0000
[E27B20 |   8064/60000 ( 13%) ] Loss: 0.2186 top1= 88.7500
[E27B30 |  11904/60000 ( 20%) ] Loss: 0.1828 top1= 90.0000
[E27B40 |  15744/60000 ( 26%) ] Loss: 0.2091 top1= 89.0625
[E27B50 |  19584/60000 ( 33%) ] Loss: 0.1981 top1= 89.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7404 top1= 45.6931

Train epoch 28
[E28B0  |    384/60000 (  1%) ] Loss: 0.2608 top1= 96.5625

=== Log global consensus distance @ E28B0 ===
consensus_distance=17.579


[E28B10 |   4224/60000 (  7%) ] Loss: 0.1717 top1= 99.6875
[E28B20 |   8064/60000 ( 13%) ] Loss: 0.2154 top1= 88.7500
[E28B30 |  11904/60000 ( 20%) ] Loss: 0.1785 top1= 90.0000
[E28B40 |  15744/60000 ( 26%) ] Loss: 0.2128 top1= 99.0625
[E28B50 |  19584/60000 ( 33%) ] Loss: 0.1794 top1= 89.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7322 top1= 46.9050

Train epoch 29
[E29B0  |    384/60000 (  1%) ] Loss: 0.2562 top1= 87.1875

=== Log global consensus distance @ E29B0 ===
consensus_distance=17.693


[E29B10 |   4224/60000 (  7%) ] Loss: 0.1683 top1= 99.6875
[E29B20 |   8064/60000 ( 13%) ] Loss: 0.2093 top1= 98.7500
[E29B30 |  11904/60000 ( 20%) ] Loss: 0.1885 top1= 89.6875
[E29B40 |  15744/60000 ( 26%) ] Loss: 0.2260 top1= 87.8125
[E29B50 |  19584/60000 ( 33%) ] Loss: 0.1870 top1= 89.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7304 top1= 47.5962

Train epoch 30
[E30B0  |    384/60000 (  1%) ] Loss: 0.2426 top1= 88.1250

=== Log global consensus distance @ E30B0 ===
consensus_distance=17.802


[E30B10 |   4224/60000 (  7%) ] Loss: 0.2578 top1= 87.8125
[E30B20 |   8064/60000 ( 13%) ] Loss: 0.2259 top1= 88.4375
[E30B30 |  11904/60000 ( 20%) ] Loss: 0.1834 top1= 90.0000
[E30B40 |  15744/60000 ( 26%) ] Loss: 0.2303 top1= 88.1250
[E30B50 |  19584/60000 ( 33%) ] Loss: 0.1778 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7091 top1= 45.6530

