
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker LabelFlippingWorker
=> Add worker LabelFlippingWorker
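The setup above registers nine honest SGD-with-momentum workers and two label-flipping Byzantine workers. Neither class's code appears in the log; the following is a minimal sketch of what each might do, assuming a plain SGDM update and the common MNIST flip c -> 9 - c (the names `sgdm_step` and `flip_labels` are hypothetical, not from this codebase):

```python
# Hypothetical sketch: an honest SGD-with-momentum step and the label flip
# a Byzantine worker might apply before computing its gradient.

def sgdm_step(w, grad, buf, lr=0.1, momentum=0.9):
    """One SGDM update: buf <- momentum*buf + grad; w <- w - lr*buf."""
    buf = [momentum * b + g for b, g in zip(buf, grad)]
    w = [wi - lr * bi for wi, bi in zip(w, buf)]
    return w, buf

def flip_labels(targets, num_classes=10):
    """Label-flipping attack: class c is reported as num_classes - 1 - c."""
    return [num_classes - 1 - t for t in targets]
```

With momentum=0.9, as in the log, each worker's update accumulates a decaying sum of past gradients; the attackers poison the shared training signal only through their flipped-label gradients.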

=== Start adding graph ===
=> Add graph codes.graph_utils.TorusByzantineGraph
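The log only prints the graph object, not its structure. The internals of `TorusByzantineGraph` are not shown; with nine honest workers, a 3x3 wrap-around torus is a plausible topology. A sketch of a 2D torus adjacency matrix under that assumption (`torus_adjacency` is a hypothetical helper):

```python
import numpy as np

def torus_adjacency(rows, cols):
    """Adjacency matrix of a rows x cols torus: every node is linked to its
    four wrap-around neighbours (up, down, left, right)."""
    n = rows * cols
    A = np.zeros((n, n), dtype=int)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                j = ((r + dr) % rows) * cols + (c + dc) % cols
                A[i, j] = 1
    return A
```

In a Byzantine variant of such a graph, the attacker nodes would additionally be wired to some or all honest nodes, so their poisoned updates enter the gossip averaging.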

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([8, 8, 7, 8, 4], device='cuda:0')
Worker 10 has targets: tensor([1, 0, 8, 9, 6], device='cuda:0')
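The peek shows a heavily non-IID split: honest workers 0-8 each hold essentially one digit class, while the two attackers (workers 9-10) report mixed labels. The actual partitioner is not shown; a sketch of a sort-by-label sharding that produces this kind of pattern (`partition_by_label` is hypothetical):

```python
def partition_by_label(targets, num_workers):
    """Sort sample indices by label and cut into contiguous shards, so each
    worker receives samples from as few classes as possible."""
    order = sorted(range(len(targets)), key=lambda i: targets[i])
    shard = len(order) // num_workers
    return [order[k * shard:(k + 1) * shard] for k in range(num_workers)]
```

With 60000 MNIST samples split this way across the honest workers, each shard covers roughly one class, matching the nearly single-digit target tensors printed above.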
=== Log global consensus distance @ E1B0 ===
consensus_distance=0.015
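How the logged `consensus_distance` is computed is not shown; one common definition is the mean L2 distance of each worker's flattened parameter vector from the average over all workers. A sketch under that assumption:

```python
import numpy as np

def consensus_distance(worker_params):
    """Mean L2 distance of each worker's flattened parameter vector from the
    parameter average (one plausible reading of the logged metric)."""
    P = np.stack(worker_params)      # shape: (num_workers, dim)
    mean = P.mean(axis=0)
    return float(np.mean(np.linalg.norm(P - mean, axis=1)))
```

Under this reading, the climb from 0.015 at E1B0 to roughly 1.96 by E30B0 means the workers' models drift steadily apart as training proceeds.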


[E 1B10 |   3872/60000 (  6%) ] Loss: 0.7271 top1= 84.0278
[E 1B20 |   7392/60000 ( 12%) ] Loss: 0.6156 top1= 84.0278
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.4327 top1= 88.1944
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.3723 top1= 89.5833
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.4266 top1= 90.2778

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.6246 top1= 53.2151
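Each epoch ends by evaluating an averaged model on the validation set. The averaging code is not shown; a sketch of a uniform parameter average consistent with the "Averaged model" line (`average_models` is hypothetical):

```python
def average_models(worker_params):
    """Uniform average of the workers' parameter lists -- a simple way to
    build the single global model that is evaluated once per epoch."""
    n = len(worker_params)
    return [sum(ws) / n for ws in zip(*worker_params)]
```

The gap between local training accuracy (around 90% here) and the averaged model's validation accuracy (53% at epoch 1) is visible throughout the log, consistent with averaging over non-IID workers plus two label-flipping attackers.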

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.2313 top1= 95.1389

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.332


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.4109 top1= 86.1111
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.3523 top1= 87.5000
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.3041 top1= 90.9722
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.3845 top1= 91.3194
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.2664 top1= 92.0139

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3912 top1= 52.0733

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.1543 top1= 96.5278

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.557


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.1402 top1= 95.4861
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.1881 top1= 93.0556
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.1529 top1= 96.5278
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.1583 top1= 94.7917
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.1200 top1= 97.9167

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2161 top1= 60.3265

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.0702 top1= 99.3056

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.618


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.1384 top1= 95.4861
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.1434 top1= 93.7500
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.3526 top1= 95.4861
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.1904 top1= 93.4028
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.0998 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2893 top1= 58.0329

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.1475 top1= 96.5278

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.783


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.0744 top1= 97.9167
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.3311 top1= 90.9722
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.0654 top1= 97.2222
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.1240 top1= 96.1806
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.1029 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2324 top1= 59.1046

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.1071 top1= 97.5694

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.934


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.0734 top1= 98.9583
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.1882 top1= 93.7500
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.4881 top1= 96.8750
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.0632 top1= 98.6111
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.0904 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2754 top1= 58.0128

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.0635 top1= 98.2639

=== Log global consensus distance @ E7B0 ===
consensus_distance=1.070


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.1877 top1= 94.4444
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.1496 top1= 95.8333
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.0747 top1= 97.2222
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.1032 top1= 96.8750
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.0796 top1= 97.5694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1761 top1= 61.5986

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.0516 top1= 99.3056

=== Log global consensus distance @ E8B0 ===
consensus_distance=1.137


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.0621 top1= 98.6111
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.1079 top1= 95.8333
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.3978 top1= 96.8750
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.2095 top1= 96.8750
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.0542 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2538 top1= 60.8273

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.0475 top1= 98.2639

=== Log global consensus distance @ E9B0 ===
consensus_distance=1.275


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.0799 top1= 97.5694
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.1030 top1= 96.5278
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.3144 top1= 97.2222
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.1898 top1= 96.1806
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.0422 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0911 top1= 63.0509

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.0486 top1= 98.9583

=== Log global consensus distance @ E10B0 ===
consensus_distance=1.407


[E10B10 |   3872/60000 (  6%) ] Loss: 0.0609 top1= 98.9583
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.1230 top1= 95.4861
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.1242 top1= 97.2222
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.2233 top1= 96.1806
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.0867 top1= 96.1806

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9470 top1= 68.5597

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.0359 top1= 98.9583

=== Log global consensus distance @ E11B0 ===
consensus_distance=1.413


[E11B10 |   3872/60000 (  6%) ] Loss: 0.0555 top1= 98.2639
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.0426 top1= 98.6111
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.0196 top1= 99.3056
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.0505 top1= 98.9583
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.0348 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9163 top1= 69.8017

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.0226 top1= 99.3056

=== Log global consensus distance @ E12B0 ===
consensus_distance=1.469


[E12B10 |   3872/60000 (  6%) ] Loss: 0.0361 top1= 98.9583
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.1021 top1= 96.5278
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.3324 top1= 97.5694
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.0317 top1= 98.6111
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.1664 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9078 top1= 69.7516

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.0217 top1= 99.6528

=== Log global consensus distance @ E13B0 ===
consensus_distance=1.520


[E13B10 |   3872/60000 (  6%) ] Loss: 0.0300 top1= 99.6528
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.0957 top1= 96.8750
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.3455 top1= 97.2222
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.0420 top1= 98.6111
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.0225 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9265 top1= 68.5897

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.0248 top1= 99.3056

=== Log global consensus distance @ E14B0 ===
consensus_distance=1.566


[E14B10 |   3872/60000 (  6%) ] Loss: 0.0388 top1= 98.2639
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.1347 top1= 96.1806
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.1116 top1= 97.5694
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.0459 top1= 98.6111
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.0484 top1= 98.2639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9363 top1= 68.9603

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.0233 top1= 99.3056

=== Log global consensus distance @ E15B0 ===
consensus_distance=1.589


[E15B10 |   3872/60000 (  6%) ] Loss: 0.0230 top1= 99.6528
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.0122 top1=100.0000
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.3404 top1= 97.5694
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.0276 top1= 99.3056
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.0332 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9737 top1= 68.4796

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.0325 top1= 98.9583

=== Log global consensus distance @ E16B0 ===
consensus_distance=1.631


[E16B10 |   3872/60000 (  6%) ] Loss: 0.0354 top1= 99.3056
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.2727 top1= 94.0972
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.1155 top1= 97.5694
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.2706 top1= 96.8750
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.0055 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9462 top1= 68.4495

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.0216 top1= 99.6528

=== Log global consensus distance @ E17B0 ===
consensus_distance=1.673


[E17B10 |   3872/60000 (  6%) ] Loss: 0.0484 top1= 98.6111
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.0288 top1= 98.9583
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.0188 top1= 99.3056
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.0325 top1= 98.9583
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.0540 top1= 98.2639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9552 top1= 68.2592

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.0200 top1= 99.3056

=== Log global consensus distance @ E18B0 ===
consensus_distance=1.651


[E18B10 |   3872/60000 (  6%) ] Loss: 0.0292 top1= 98.9583
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.1039 top1= 96.1806
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.0470 top1= 98.6111
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.1543 top1= 96.1806
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.0255 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9651 top1= 69.0004

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.0189 top1= 99.3056

=== Log global consensus distance @ E19B0 ===
consensus_distance=1.689


[E19B10 |   3872/60000 (  6%) ] Loss: 0.0768 top1= 96.8750
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.0683 top1= 97.2222
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.0184 top1= 99.3056
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.0271 top1= 99.6528
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.0387 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9030 top1= 69.8017

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.0228 top1= 98.9583

=== Log global consensus distance @ E20B0 ===
consensus_distance=1.732


[E20B10 |   3872/60000 (  6%) ] Loss: 0.0386 top1= 98.9583
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.0094 top1=100.0000
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.0133 top1= 99.6528
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.0297 top1= 98.6111
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.0044 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8762 top1= 70.0921

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.0163 top1= 99.3056

=== Log global consensus distance @ E21B0 ===
consensus_distance=1.774


[E21B10 |   3872/60000 (  6%) ] Loss: 0.0292 top1= 98.9583
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.0092 top1=100.0000
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.0141 top1=100.0000
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.0238 top1= 99.6528
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.0114 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9348 top1= 69.3109

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.0125 top1= 99.6528

=== Log global consensus distance @ E22B0 ===
consensus_distance=1.782


[E22B10 |   3872/60000 (  6%) ] Loss: 0.0163 top1= 98.9583
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.0142 top1= 99.6528
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.4797 top1= 97.5694
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.0180 top1= 99.6528
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.0095 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9000 top1= 69.9619

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.0230 top1= 99.6528

=== Log global consensus distance @ E23B0 ===
consensus_distance=1.813


[E23B10 |   3872/60000 (  6%) ] Loss: 0.0072 top1=100.0000
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.1003 top1= 95.8333
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.0092 top1=100.0000
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.0913 top1= 97.2222
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.0057 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9655 top1= 68.5096

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.0159 top1= 99.3056

=== Log global consensus distance @ E24B0 ===
consensus_distance=1.847


[E24B10 |   3872/60000 (  6%) ] Loss: 0.0348 top1= 98.6111
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.0997 top1= 96.5278
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.2360 top1= 97.5694
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.0325 top1= 98.6111
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.0033 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8983 top1= 68.8101

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.1889 top1= 93.7500

=== Log global consensus distance @ E25B0 ===
consensus_distance=1.828


[E25B10 |   3872/60000 (  6%) ] Loss: 0.0369 top1= 98.6111
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.0545 top1= 97.9167
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.1396 top1= 97.5694
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.0238 top1= 99.3056
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.0053 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9489 top1= 69.4712

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.0088 top1= 99.3056

=== Log global consensus distance @ E26B0 ===
consensus_distance=1.865


[E26B10 |   3872/60000 (  6%) ] Loss: 0.0371 top1= 98.6111
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.0194 top1= 99.3056
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.0083 top1= 99.6528
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.0114 top1= 99.3056
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.0089 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9191 top1= 69.8718

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.0132 top1= 99.3056

=== Log global consensus distance @ E27B0 ===
consensus_distance=1.903


[E27B10 |   3872/60000 (  6%) ] Loss: 0.0260 top1= 98.6111
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.0050 top1=100.0000
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.0092 top1=100.0000
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.0219 top1= 99.3056
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.0212 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8795 top1= 70.1723

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.0101 top1= 99.6528

=== Log global consensus distance @ E28B0 ===
consensus_distance=1.935


[E28B10 |   3872/60000 (  6%) ] Loss: 0.0346 top1= 98.6111
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.0092 top1=100.0000
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.2191 top1= 97.5694
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.0407 top1= 98.2639
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.0661 top1= 97.5694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9749 top1= 68.9603

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.0128 top1= 99.6528

=== Log global consensus distance @ E29B0 ===
consensus_distance=1.934


[E29B10 |   3872/60000 (  6%) ] Loss: 0.0325 top1= 98.6111
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.0141 top1= 99.6528
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.4896 top1= 97.5694
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.0200 top1= 99.3056
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.0682 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9300 top1= 69.4111

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.0072 top1= 99.6528

=== Log global consensus distance @ E30B0 ===
consensus_distance=1.957


[E30B10 |   3872/60000 (  6%) ] Loss: 0.0151 top1= 99.6528
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.0038 top1=100.0000
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.2672 top1= 97.5694
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.0624 top1= 97.9167
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.0169 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9458 top1= 69.4311

