
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker SGDMWorker(index=9, momentum=0.9)
=> Add worker ByzantineWorker(index=10)
=> Add worker ByzantineWorker(index=11)
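The lines above register ten honest SGD-with-momentum workers and two Byzantine workers. A minimal sketch of what such worker objects might look like (the class names match the log, but the bodies are hypothetical reconstructions, not the repo's code; the actual Byzantine attack used in this run is not shown in the log):

```python
class SGDMWorker:
    """Honest worker running SGD with momentum (hypothetical sketch)."""
    def __init__(self, index, momentum=0.9, lr=0.1):
        self.index, self.momentum, self.lr = index, momentum, lr
        self.velocity = 0.0
    def step(self, grad):
        # PyTorch-style momentum buffer: v <- m*v + g, update is -lr*v
        self.velocity = self.momentum * self.velocity + grad
        return -self.lr * self.velocity

class ByzantineWorker:
    """Adversarial worker; a sign-flip attack is used here as a placeholder
    for whatever attack the experiment actually runs."""
    def __init__(self, index):
        self.index = index
    def step(self, grad):
        return grad  # a real attack would return a crafted malicious update

workers = [SGDMWorker(i, momentum=0.9) for i in range(10)]
workers += [ByzantineWorker(i) for i in (10, 11)]  # indices 10, 11 as in the log
```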

=== Start adding graph ===
=> Add graph codes.graph_utils.RandomSmallWorldGraph
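The communication topology is a random small-world graph over the 12 workers. `RandomSmallWorldGraph` is not shown, but a Watts–Strogatz-style construction (ring lattice plus random rewiring) is the standard way to build one; the sketch below is an assumption about its shape, not the repo's implementation:

```python
import random

def small_world_graph(n, k, p, seed=0):
    """Ring lattice on n nodes (k//2 neighbours per side), each lattice
    edge rewired to a random endpoint with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in list(adj[i]):  # snapshot: adj[i] is mutated below
            if j > i and rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

graph = small_world_graph(n=12, k=4, p=0.3)  # parameters are illustrative
```

Rewiring keeps the edge count fixed while shortening typical path lengths, which is what the "average shortest path distance" metric below measures.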

Train epoch 1
[E 1B0  |    384/60000 (  1%) ] Loss: 2.3054 top1= 10.0000

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([1, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([2, 3, 3, 3, 3], device='cuda:0')
Worker 4 has targets: tensor([3, 4, 4, 4, 4], device='cuda:0')
Worker 5 has targets: tensor([4, 5, 5, 5, 5], device='cuda:0')
Worker 6 has targets: tensor([6, 6, 6, 6, 6], device='cuda:0')
Worker 7 has targets: tensor([7, 7, 7, 7, 7], device='cuda:0')
Worker 8 has targets: tensor([7, 8, 8, 8, 8], device='cuda:0')
Worker 9 has targets: tensor([8, 9, 9, 9, 9], device='cuda:0')
Worker 10 has targets: tensor([4, 8, 8, 6, 9], device='cuda:0')
Worker 11 has targets: tensor([5, 3, 6, 0, 9], device='cuda:0')
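The peek shows a strongly non-IID split: honest worker i sees (almost only) class i, while the Byzantine workers draw mixed labels. One common way to produce this pattern is to sort the dataset by label and hand each honest worker a contiguous shard; the helper below is a hedged sketch of that idea, not the repo's partitioner. With MNIST's unequal class counts, shard boundaries spill into neighbouring classes, which is exactly why workers 2-5, 8, and 9 each see one stray label above:

```python
def label_sorted_partition(labels, n_workers):
    """Sort sample indices by label, then slice into equal contiguous shards."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    shard = len(order) // n_workers
    return [order[w * shard:(w + 1) * shard] for w in range(n_workers)]

labels = [i % 10 for i in range(60000)]  # stand-in for MNIST targets
parts = label_sorted_partition(labels, 10)
print(sorted({labels[i] for i in parts[0]}))  # → [0]: worker 0 sees only class 0
```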



=== Log global consensus distance @ E1B0 ===
consensus_distance=0.001
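Consensus distance quantifies how far the workers' models have drifted apart. A definition consistent with this log line is the mean (squared) Euclidean distance of each worker's parameter vector from the network average; whether the squared or unsquared form is used here is an assumption. A pure-Python sketch:

```python
def consensus_distance(param_vectors):
    """Average squared L2 distance of each worker's parameters from the mean."""
    n, d = len(param_vectors), len(param_vectors[0])
    mean = [sum(v[j] for v in param_vectors) / n for j in range(d)]
    return sum(
        sum((v[j] - mean[j]) ** 2 for j in range(d))
        for v in param_vectors
    ) / n

print(consensus_distance([[1.0, 0.0], [0.0, 1.0]]))  # → 0.5
```

The near-zero value at E1B0 reflects identical initialization; it grows once heterogeneous local updates begin, then shrinks as gossip averaging takes effect in later epochs.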



=== Log average shortest path distance for small world @ E1B0 ===
2.7777777777777777
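This metric is the average shortest-path distance over all ordered pairs of nodes in the communication graph; small-world rewiring keeps it low even for sparse topologies. A self-contained BFS version (how the repo computes it, e.g. via networkx, is not shown):

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean BFS hop distance over all ordered pairs of distinct reachable nodes."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

ring = {i: {(i - 1) % 12, (i + 1) % 12} for i in range(12)}  # plain 12-cycle
print(avg_shortest_path(ring))  # → 36/11 ≈ 3.27; rewiring would lower this
```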


[E 1B10 |   4224/60000 (  7%) ] Loss: 0.1676 top1= 95.6250
[E 1B20 |   8064/60000 ( 13%) ] Loss: 0.0617 top1= 98.4375
[E 1B30 |  11904/60000 ( 20%) ] Loss: 0.0877 top1= 96.8750
[E 1B40 |  15744/60000 ( 26%) ] Loss: 0.0754 top1= 98.1250
[E 1B50 |  19584/60000 ( 33%) ] Loss: 0.0913 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7851 top1= 73.0068
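The "Averaged model" evaluation presumably averages all workers' parameters coordinate-wise and validates the single resulting model, which explains the large gap between per-worker training accuracy (~96-98%) and global validation accuracy (~73%) under this non-IID split. A hypothetical helper illustrating the averaging step:

```python
def average_models(param_vectors):
    """Coordinate-wise mean of the workers' parameter vectors."""
    n = len(param_vectors)
    return [
        sum(v[j] for v in param_vectors) / n
        for j in range(len(param_vectors[0]))
    ]

print(average_models([[1.0, 2.0], [3.0, 4.0]]))  # → [2.0, 3.0]
```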

Train epoch 2
[E 2B0  |    384/60000 (  1%) ] Loss: 0.1147 top1= 96.8750

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.054


[E 2B10 |   4224/60000 (  7%) ] Loss: 0.0372 top1= 99.6875
[E 2B20 |   8064/60000 ( 13%) ] Loss: 0.0347 top1= 98.7500
[E 2B30 |  11904/60000 ( 20%) ] Loss: 0.0418 top1= 98.4375
[E 2B40 |  15744/60000 ( 26%) ] Loss: 0.0547 top1= 98.7500
[E 2B50 |  19584/60000 ( 33%) ] Loss: 0.0565 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5666 top1= 79.9780

Train epoch 3
[E 3B0  |    384/60000 (  1%) ] Loss: 0.0830 top1= 98.1250

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.038


[E 3B10 |   4224/60000 (  7%) ] Loss: 0.0309 top1= 99.3750
[E 3B20 |   8064/60000 ( 13%) ] Loss: 0.0277 top1= 99.0625
[E 3B30 |  11904/60000 ( 20%) ] Loss: 0.0367 top1= 98.1250
[E 3B40 |  15744/60000 ( 26%) ] Loss: 0.0559 top1= 98.4375
[E 3B50 |  19584/60000 ( 33%) ] Loss: 0.0524 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5374 top1= 81.7408

Train epoch 4
[E 4B0  |    384/60000 (  1%) ] Loss: 0.0695 top1= 98.1250

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.036


[E 4B10 |   4224/60000 (  7%) ] Loss: 0.0276 top1= 98.7500
[E 4B20 |   8064/60000 ( 13%) ] Loss: 0.0157 top1= 99.3750
[E 4B30 |  11904/60000 ( 20%) ] Loss: 0.0266 top1= 98.7500
[E 4B40 |  15744/60000 ( 26%) ] Loss: 0.0414 top1= 98.4375
[E 4B50 |  19584/60000 ( 33%) ] Loss: 0.0540 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5752 top1= 78.9663

Train epoch 5
[E 5B0  |    384/60000 (  1%) ] Loss: 0.0490 top1= 99.0625

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.034


[E 5B10 |   4224/60000 (  7%) ] Loss: 0.0216 top1= 99.0625
[E 5B20 |   8064/60000 ( 13%) ] Loss: 0.0177 top1= 99.0625
[E 5B30 |  11904/60000 ( 20%) ] Loss: 0.0211 top1= 99.3750
[E 5B40 |  15744/60000 ( 26%) ] Loss: 0.0270 top1= 98.7500
[E 5B50 |  19584/60000 ( 33%) ] Loss: 0.0287 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4672 top1= 83.4235

Train epoch 6
[E 6B0  |    384/60000 (  1%) ] Loss: 0.0355 top1= 99.3750

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.031


[E 6B10 |   4224/60000 (  7%) ] Loss: 0.0236 top1= 99.3750
[E 6B20 |   8064/60000 ( 13%) ] Loss: 0.0124 top1= 99.3750
[E 6B30 |  11904/60000 ( 20%) ] Loss: 0.0171 top1= 99.6875
[E 6B40 |  15744/60000 ( 26%) ] Loss: 0.0476 top1= 98.4375
[E 6B50 |  19584/60000 ( 33%) ] Loss: 0.0411 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4095 top1= 85.6671

Train epoch 7
[E 7B0  |    384/60000 (  1%) ] Loss: 0.0450 top1= 99.0625

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.031


[E 7B10 |   4224/60000 (  7%) ] Loss: 0.0143 top1= 99.3750
[E 7B20 |   8064/60000 ( 13%) ] Loss: 0.0270 top1= 99.3750
[E 7B30 |  11904/60000 ( 20%) ] Loss: 0.0565 top1= 98.4375
[E 7B40 |  15744/60000 ( 26%) ] Loss: 0.0381 top1= 98.4375
[E 7B50 |  19584/60000 ( 33%) ] Loss: 0.0340 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6099 top1= 80.0180

Train epoch 8
[E 8B0  |    384/60000 (  1%) ] Loss: 0.0611 top1= 98.7500

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.031


[E 8B10 |   4224/60000 (  7%) ] Loss: 0.0109 top1= 99.6875
[E 8B20 |   8064/60000 ( 13%) ] Loss: 0.0219 top1= 99.6875
[E 8B30 |  11904/60000 ( 20%) ] Loss: 0.0094 top1= 99.6875
[E 8B40 |  15744/60000 ( 26%) ] Loss: 0.0269 top1= 99.0625
[E 8B50 |  19584/60000 ( 33%) ] Loss: 0.0141 top1= 99.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5009 top1= 82.6522

Train epoch 9
[E 9B0  |    384/60000 (  1%) ] Loss: 0.0367 top1= 99.0625

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.033


[E 9B10 |   4224/60000 (  7%) ] Loss: 0.0050 top1=100.0000
[E 9B20 |   8064/60000 ( 13%) ] Loss: 0.0114 top1= 99.3750
[E 9B30 |  11904/60000 ( 20%) ] Loss: 0.0322 top1= 99.0625
[E 9B40 |  15744/60000 ( 26%) ] Loss: 0.0339 top1= 99.0625
[E 9B50 |  19584/60000 ( 33%) ] Loss: 0.0150 top1= 99.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4419 top1= 84.8658

Train epoch 10
[E10B0  |    384/60000 (  1%) ] Loss: 0.0291 top1= 98.7500

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.023


[E10B10 |   4224/60000 (  7%) ] Loss: 0.0100 top1= 99.6875
[E10B20 |   8064/60000 ( 13%) ] Loss: 0.0146 top1= 99.6875
[E10B30 |  11904/60000 ( 20%) ] Loss: 0.0058 top1=100.0000
[E10B40 |  15744/60000 ( 26%) ] Loss: 0.0259 top1= 98.4375
[E10B50 |  19584/60000 ( 33%) ] Loss: 0.0109 top1= 99.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4016 top1= 87.2095

Train epoch 11
[E11B0  |    384/60000 (  1%) ] Loss: 0.0313 top1= 98.4375

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.026


[E11B10 |   4224/60000 (  7%) ] Loss: 0.0062 top1=100.0000
[E11B20 |   8064/60000 ( 13%) ] Loss: 0.0049 top1=100.0000
[E11B30 |  11904/60000 ( 20%) ] Loss: 0.0176 top1= 99.3750
[E11B40 |  15744/60000 ( 26%) ] Loss: 0.0092 top1= 99.6875
[E11B50 |  19584/60000 ( 33%) ] Loss: 0.0249 top1= 99.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4416 top1= 85.9675

Train epoch 12
[E12B0  |    384/60000 (  1%) ] Loss: 0.0261 top1= 99.0625

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.034


[E12B10 |   4224/60000 (  7%) ] Loss: 0.0273 top1= 99.3750
[E12B20 |   8064/60000 ( 13%) ] Loss: 0.0034 top1=100.0000
[E12B30 |  11904/60000 ( 20%) ] Loss: 0.0045 top1=100.0000
[E12B40 |  15744/60000 ( 26%) ] Loss: 0.0231 top1= 99.3750
[E12B50 |  19584/60000 ( 33%) ] Loss: 0.0340 top1= 99.0625

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4058 top1= 87.1394

Train epoch 13
[E13B0  |    384/60000 (  1%) ] Loss: 0.0329 top1= 98.7500

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.036


[E13B10 |   4224/60000 (  7%) ] Loss: 0.0058 top1=100.0000
[E13B20 |   8064/60000 ( 13%) ] Loss: 0.0250 top1= 99.3750
[E13B30 |  11904/60000 ( 20%) ] Loss: 0.0210 top1= 99.0625
[E13B40 |  15744/60000 ( 26%) ] Loss: 0.0226 top1= 99.3750
[E13B50 |  19584/60000 ( 33%) ] Loss: 0.0102 top1= 99.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4014 top1= 87.1995

Train epoch 14
[E14B0  |    384/60000 (  1%) ] Loss: 0.0060 top1= 99.6875

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.027


[E14B10 |   4224/60000 (  7%) ] Loss: 0.0308 top1= 98.4375
[E14B20 |   8064/60000 ( 13%) ] Loss: 0.0141 top1= 99.6875
[E14B30 |  11904/60000 ( 20%) ] Loss: 0.0086 top1= 99.6875
[E14B40 |  15744/60000 ( 26%) ] Loss: 0.0134 top1= 99.6875
[E14B50 |  19584/60000 ( 33%) ] Loss: 0.0047 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3478 top1= 89.2428

Train epoch 15
[E15B0  |    384/60000 (  1%) ] Loss: 0.0243 top1= 98.7500

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.022


[E15B10 |   4224/60000 (  7%) ] Loss: 0.0270 top1= 99.0625
[E15B20 |   8064/60000 ( 13%) ] Loss: 0.0040 top1=100.0000
[E15B30 |  11904/60000 ( 20%) ] Loss: 0.0095 top1= 99.6875
[E15B40 |  15744/60000 ( 26%) ] Loss: 0.0071 top1= 99.6875
[E15B50 |  19584/60000 ( 33%) ] Loss: 0.0262 top1= 99.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3824 top1= 88.2312

Train epoch 16
[E16B0  |    384/60000 (  1%) ] Loss: 0.0113 top1= 99.3750

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.023


[E16B10 |   4224/60000 (  7%) ] Loss: 0.0328 top1= 99.6875
[E16B20 |   8064/60000 ( 13%) ] Loss: 0.0063 top1= 99.6875
[E16B30 |  11904/60000 ( 20%) ] Loss: 0.0027 top1=100.0000
[E16B40 |  15744/60000 ( 26%) ] Loss: 0.0073 top1= 99.6875
[E16B50 |  19584/60000 ( 33%) ] Loss: 0.0018 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4107 top1= 87.4499

Train epoch 17
[E17B0  |    384/60000 (  1%) ] Loss: 0.0073 top1= 99.6875

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.038


[E17B10 |   4224/60000 (  7%) ] Loss: 0.0015 top1=100.0000
[E17B20 |   8064/60000 ( 13%) ] Loss: 0.0161 top1= 99.0625
[E17B30 |  11904/60000 ( 20%) ] Loss: 0.0386 top1= 99.6875
[E17B40 |  15744/60000 ( 26%) ] Loss: 0.0178 top1= 99.3750
[E17B50 |  19584/60000 ( 33%) ] Loss: 0.0130 top1= 99.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3678 top1= 89.3429

Train epoch 18
[E18B0  |    384/60000 (  1%) ] Loss: 0.0194 top1= 99.6875

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.034


[E18B10 |   4224/60000 (  7%) ] Loss: 0.0051 top1=100.0000
[E18B20 |   8064/60000 ( 13%) ] Loss: 0.0025 top1=100.0000
[E18B30 |  11904/60000 ( 20%) ] Loss: 0.0049 top1=100.0000
[E18B40 |  15744/60000 ( 26%) ] Loss: 0.0079 top1=100.0000
[E18B50 |  19584/60000 ( 33%) ] Loss: 0.0079 top1= 99.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4031 top1= 87.9507

Train epoch 19
[E19B0  |    384/60000 (  1%) ] Loss: 0.0081 top1=100.0000

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.019


[E19B10 |   4224/60000 (  7%) ] Loss: 0.0042 top1= 99.6875
[E19B20 |   8064/60000 ( 13%) ] Loss: 0.0028 top1=100.0000
[E19B30 |  11904/60000 ( 20%) ] Loss: 0.0184 top1= 99.6875
[E19B40 |  15744/60000 ( 26%) ] Loss: 0.0092 top1= 99.6875
[E19B50 |  19584/60000 ( 33%) ] Loss: 0.0012 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3367 top1= 89.7837

Train epoch 20
[E20B0  |    384/60000 (  1%) ] Loss: 0.0056 top1=100.0000

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.020


[E20B10 |   4224/60000 (  7%) ] Loss: 0.0181 top1= 99.3750
[E20B20 |   8064/60000 ( 13%) ] Loss: 0.0018 top1=100.0000
[E20B30 |  11904/60000 ( 20%) ] Loss: 0.0086 top1= 99.6875
[E20B40 |  15744/60000 ( 26%) ] Loss: 0.0060 top1= 99.6875
[E20B50 |  19584/60000 ( 33%) ] Loss: 0.0019 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3943 top1= 87.7905

Train epoch 21
[E21B0  |    384/60000 (  1%) ] Loss: 0.0028 top1=100.0000

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.018


[E21B10 |   4224/60000 (  7%) ] Loss: 0.0018 top1=100.0000
[E21B20 |   8064/60000 ( 13%) ] Loss: 0.0053 top1= 99.6875
[E21B30 |  11904/60000 ( 20%) ] Loss: 0.0018 top1=100.0000
[E21B40 |  15744/60000 ( 26%) ] Loss: 0.0078 top1=100.0000
[E21B50 |  19584/60000 ( 33%) ] Loss: 0.0022 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4087 top1= 88.1911

Train epoch 22
[E22B0  |    384/60000 (  1%) ] Loss: 0.0383 top1= 99.6875

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.027


[E22B10 |   4224/60000 (  7%) ] Loss: 0.0074 top1= 99.3750
[E22B20 |   8064/60000 ( 13%) ] Loss: 0.0042 top1=100.0000
[E22B30 |  11904/60000 ( 20%) ] Loss: 0.0020 top1=100.0000
[E22B40 |  15744/60000 ( 26%) ] Loss: 0.0136 top1= 99.3750
[E22B50 |  19584/60000 ( 33%) ] Loss: 0.0033 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3990 top1= 88.7720

Train epoch 23
[E23B0  |    384/60000 (  1%) ] Loss: 0.0037 top1=100.0000

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.024


[E23B10 |   4224/60000 (  7%) ] Loss: 0.0048 top1= 99.6875
[E23B20 |   8064/60000 ( 13%) ] Loss: 0.0028 top1=100.0000
[E23B30 |  11904/60000 ( 20%) ] Loss: 0.0042 top1=100.0000
[E23B40 |  15744/60000 ( 26%) ] Loss: 0.0053 top1= 99.6875
[E23B50 |  19584/60000 ( 33%) ] Loss: 0.0099 top1= 99.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3424 top1= 89.4331

Train epoch 24
[E24B0  |    384/60000 (  1%) ] Loss: 0.0099 top1= 99.3750

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.027


[E24B10 |   4224/60000 (  7%) ] Loss: 0.0010 top1=100.0000
[E24B20 |   8064/60000 ( 13%) ] Loss: 0.0021 top1=100.0000
[E24B30 |  11904/60000 ( 20%) ] Loss: 0.0027 top1=100.0000
[E24B40 |  15744/60000 ( 26%) ] Loss: 0.0073 top1=100.0000
[E24B50 |  19584/60000 ( 33%) ] Loss: 0.0007 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4331 top1= 86.5485

Train epoch 25
[E25B0  |    384/60000 (  1%) ] Loss: 0.0018 top1=100.0000

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.013


[E25B10 |   4224/60000 (  7%) ] Loss: 0.0005 top1=100.0000
[E25B20 |   8064/60000 ( 13%) ] Loss: 0.0095 top1= 99.6875
[E25B30 |  11904/60000 ( 20%) ] Loss: 0.0009 top1=100.0000
[E25B40 |  15744/60000 ( 26%) ] Loss: 0.0020 top1=100.0000
[E25B50 |  19584/60000 ( 33%) ] Loss: 0.0020 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3946 top1= 88.1110

Train epoch 26
[E26B0  |    384/60000 (  1%) ] Loss: 0.0033 top1=100.0000

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.011


[E26B10 |   4224/60000 (  7%) ] Loss: 0.0014 top1=100.0000
[E26B20 |   8064/60000 ( 13%) ] Loss: 0.0010 top1=100.0000
[E26B30 |  11904/60000 ( 20%) ] Loss: 0.0013 top1=100.0000
[E26B40 |  15744/60000 ( 26%) ] Loss: 0.0016 top1=100.0000
[E26B50 |  19584/60000 ( 33%) ] Loss: 0.0044 top1= 99.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3221 top1= 90.7452

Train epoch 27
[E27B0  |    384/60000 (  1%) ] Loss: 0.0010 top1=100.0000

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.015


[E27B10 |   4224/60000 (  7%) ] Loss: 0.0025 top1=100.0000
[E27B20 |   8064/60000 ( 13%) ] Loss: 0.0011 top1=100.0000
[E27B30 |  11904/60000 ( 20%) ] Loss: 0.0021 top1=100.0000
[E27B40 |  15744/60000 ( 26%) ] Loss: 0.0015 top1=100.0000
[E27B50 |  19584/60000 ( 33%) ] Loss: 0.0057 top1= 99.6875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2835 top1= 91.9571

Train epoch 28
[E28B0  |    384/60000 (  1%) ] Loss: 0.0046 top1= 99.6875

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.020


[E28B10 |   4224/60000 (  7%) ] Loss: 0.0046 top1=100.0000
[E28B20 |   8064/60000 ( 13%) ] Loss: 0.0008 top1=100.0000
[E28B30 |  11904/60000 ( 20%) ] Loss: 0.0014 top1=100.0000
[E28B40 |  15744/60000 ( 26%) ] Loss: 0.0014 top1=100.0000
[E28B50 |  19584/60000 ( 33%) ] Loss: 0.0014 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3884 top1= 88.3313

Train epoch 29
[E29B0  |    384/60000 (  1%) ] Loss: 0.0008 top1=100.0000

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.020


[E29B10 |   4224/60000 (  7%) ] Loss: 0.0013 top1=100.0000
[E29B20 |   8064/60000 ( 13%) ] Loss: 0.0007 top1=100.0000
[E29B30 |  11904/60000 ( 20%) ] Loss: 0.0122 top1= 99.3750
[E29B40 |  15744/60000 ( 26%) ] Loss: 0.0027 top1=100.0000
[E29B50 |  19584/60000 ( 33%) ] Loss: 0.0025 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3196 top1= 91.3361

Train epoch 30
[E30B0  |    384/60000 (  1%) ] Loss: 0.0013 top1=100.0000

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.018


[E30B10 |   4224/60000 (  7%) ] Loss: 0.0022 top1=100.0000
[E30B20 |   8064/60000 ( 13%) ] Loss: 0.0033 top1=100.0000
[E30B30 |  11904/60000 ( 20%) ] Loss: 0.0020 top1=100.0000
[E30B40 |  15744/60000 ( 26%) ] Loss: 0.0009 top1=100.0000
[E30B50 |  19584/60000 ( 33%) ] Loss: 0.0018 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4223 top1= 89.0525

