
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker ByzantineWorker(index=9)
=> Add worker ByzantineWorker(index=10)
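The registration lines above set up nine honest momentum-SGD workers and two Byzantine workers. A minimal sketch of such a pool (the class names `SGDMWorker`/`ByzantineWorker` mirror the log, but their internals here are illustrative assumptions, not the repository's actual implementation):

```python
# Illustrative sketch of the worker pool registered above. Only the
# class names come from the log; the bodies are assumptions.

class SGDMWorker:
    """Honest worker running SGD with momentum (scalar sketch)."""
    def __init__(self, index, momentum=0.9):
        self.index = index
        self.momentum = momentum
        self.velocity = 0.0  # stand-in for the momentum buffer

    def local_step(self, grad, lr=0.1):
        # v <- m * v + g ; the parameter update is -lr * v
        self.velocity = self.momentum * self.velocity + grad
        return -lr * self.velocity

class ByzantineWorker:
    """Adversarial worker: ignores its data, sends arbitrary values."""
    def __init__(self, index):
        self.index = index

    def local_step(self, grad, lr=0.1):
        return 1e3  # an arbitrary (here constant) malicious update

workers = [SGDMWorker(i, momentum=0.9) for i in range(9)]
workers += [ByzantineWorker(i) for i in (9, 10)]
for w in workers:
    print(f"=> Add worker {type(w).__name__}(index={w.index})")
```

With 9 honest and 2 Byzantine workers, fewer than a quarter of the nodes are adversarial, the usual regime in which robust aggregation rules are expected to work.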

=== Start adding graph ===
=> Add graph codes.graph_utils.TorusByzantineGraph
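The communication topology is a torus among the honest workers, with the Byzantine nodes wired into it. A sketch of the torus part only (the 3×3 grid matching the nine honest workers is an assumption; the actual `TorusByzantineGraph` construction, including how the two Byzantine nodes attach, is defined in `codes.graph_utils`):

```python
# Sketch of torus neighbourhoods for the 9 honest workers on an
# assumed 3x3 grid; each node talks to 4 neighbours with wraparound.

def torus_neighbors(index, rows=3, cols=3):
    """4-neighbourhood of node `index` on a rows x cols torus."""
    r, c = divmod(index, cols)
    return sorted({
        ((r - 1) % rows) * cols + c,  # up (wraps around)
        ((r + 1) % rows) * cols + c,  # down
        r * cols + (c - 1) % cols,    # left
        r * cols + (c + 1) % cols,    # right
    })

print(torus_neighbors(0))  # → [1, 2, 3, 6]
```

Every node having the same small degree keeps per-round communication constant while the wraparound edges keep the graph's diameter low.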

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([1, 1, 2, 1, 5], device='cuda:0')
Worker 10 has targets: tensor([8, 9, 1, 0, 3], device='cuda:0')
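The peek above shows a highly non-IID split: each honest worker's shard is dominated by a single digit class with a little spillover from the adjacent class, while the Byzantine workers (9 and 10) see mixed labels. One common way to obtain such a split is to sort by label and cut contiguous shards, sketched here on synthetic labels (the codebase's actual partitioner may differ):

```python
# Sketch of a sort-by-label partition that yields per-worker shards
# like those peeked above (one dominant class, slight spillover).
# Synthetic labels stand in for the MNIST targets.

def sorted_partition(labels, n_shards):
    """Sort indices by label, then cut into contiguous equal shards."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    shard_size = len(order) // n_shards
    return [order[k * shard_size:(k + 1) * shard_size]
            for k in range(n_shards)]

# 90 synthetic samples, 9 of each digit class 0..9
labels = [d for d in range(10) for _ in range(9)]
shards = sorted_partition(labels, n_shards=9)
# Worker 0's shard: mostly class 0, spillover from class 1
print([labels[i] for i in shards[0]])  # → [0,0,0,0,0,0,0,0,0,1]
```

Because 10 classes are cut into 9 shards, each shard picks up a slice of the next class, matching, for example, worker 3 above drawing both 3s and 4s.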

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.000
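The consensus distance measures how far the workers' models have drifted apart; at E1B0 every worker still holds the identical initial model, hence 0.000. A plausible definition (the exact norm and normalisation used by the logging code are assumptions) is the root-mean-square distance of each model to the average model:

```python
# Sketch of a consensus-distance metric: RMS distance of each
# worker's parameter vector to the mean parameter vector.
import math

def consensus_distance(models):
    """models: list of parameter vectors (lists of floats)."""
    dim = len(models[0])
    mean = [sum(m[j] for m in models) / len(models) for j in range(dim)]
    sq = [sum((m[j] - mean[j]) ** 2 for j in range(dim)) for m in models]
    return math.sqrt(sum(sq) / len(models))

# At initialisation every worker holds the same model -> distance 0
same = [[0.5, -1.0, 2.0]] * 9
print(f"consensus_distance={consensus_distance(same):.3f}")  # → consensus_distance=0.000
```

In later epochs the logged value stays small but nonzero (roughly 0.02–0.07), consistent with gossip averaging pulling the non-IID workers together without ever reaching exact consensus.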


[E 1B10 |   3872/60000 (  6%) ] Loss: 1.3681 top1= 68.4028
[E 1B20 |   7392/60000 ( 12%) ] Loss: 1.1603 top1= 58.6806
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.6521 top1= 78.1250
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.4524 top1= 84.3750
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.3765 top1= 88.5417

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7084 top1= 78.2853
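The per-epoch evaluation line reports validation metrics for an "averaged model": the coordinate-wise mean of the workers' parameters. A sketch of that averaging step (whether the real evaluation excludes the Byzantine workers' parameters is an assumption; here only honest models are averaged, and the evaluation loop itself is omitted):

```python
# Sketch of building the "Averaged model" evaluated once per epoch:
# the coordinate-wise mean of the honest workers' parameter vectors.

def average_model(worker_params):
    """Coordinate-wise mean over a list of parameter vectors."""
    n = len(worker_params)
    return [sum(p[j] for p in worker_params) / n
            for j in range(len(worker_params[0]))]

honest = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(average_model(honest))  # → [3.0, 4.0]
```

Note that the averaged model's accuracy (≈78% here) sits well below the per-worker training top-1, which is expected: each worker fits its own one-class shard easily, while the averaged model must generalise across all ten classes.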

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.2940 top1= 92.0139

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.037


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.4745 top1= 82.9861
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.7334 top1= 75.3472
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.4984 top1= 86.8056
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.4610 top1= 86.8056
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.4702 top1= 83.6806

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8155 top1= 77.2837

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.4378 top1= 85.0694

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.045


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.5082 top1= 81.5972
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.5249 top1= 84.3750
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.4772 top1= 86.1111
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.5328 top1= 84.0278
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.5383 top1= 84.0278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8766 top1= 76.0917

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.4329 top1= 85.4167

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.041


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.4271 top1= 87.1528
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.5124 top1= 85.4167
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.4747 top1= 85.0694
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.3853 top1= 87.5000
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.3356 top1= 88.1944

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7892 top1= 78.2953

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.4118 top1= 85.7639

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.029


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.6271 top1= 80.2083
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.5669 top1= 82.2917
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.5478 top1= 80.9028
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.4388 top1= 87.1528
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.5025 top1= 84.0278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8641 top1= 75.9115

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.7292 top1= 79.1667

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.048


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.4416 top1= 86.1111
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.6564 top1= 75.3472
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.4593 top1= 86.4583
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.5094 top1= 82.6389
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.4259 top1= 87.1528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9766 top1= 76.1118

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.6509 top1= 82.6389

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.042


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.4402 top1= 84.7222
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.6415 top1= 80.5556
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.5325 top1= 80.2083
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.5220 top1= 82.6389
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.4686 top1= 85.7639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7738 top1= 78.9764

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.4927 top1= 84.3750

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.037


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.4387 top1= 84.7222
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.9317 top1= 72.2222
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.5396 top1= 85.7639
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.4917 top1= 82.6389
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.3367 top1= 89.5833

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7801 top1= 78.1651

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.6363 top1= 79.5139

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.057


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.5484 top1= 81.5972
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.4974 top1= 81.9444
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.6831 top1= 79.1667
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.4989 top1= 85.7639
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.5298 top1= 84.0278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8810 top1= 75.1302

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.5311 top1= 82.6389

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.047


[E10B10 |   3872/60000 (  6%) ] Loss: 0.5052 top1= 83.6806
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.6440 top1= 78.8194
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.6452 top1= 80.9028
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.4832 top1= 82.9861
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.4902 top1= 84.7222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1152 top1= 67.4279

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 1.0208 top1= 73.2639

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.068


[E11B10 |   3872/60000 (  6%) ] Loss: 0.5234 top1= 80.9028
[E11B20 |   7392/60000 ( 12%) ] Loss: 1.0056 top1= 75.0000
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.5873 top1= 79.5139
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.5012 top1= 85.7639
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.4612 top1= 89.2361

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1601 top1= 68.0188

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 1.0108 top1= 77.7778

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.060


[E12B10 |   3872/60000 (  6%) ] Loss: 0.4977 top1= 86.1111
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.7034 top1= 77.7778
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.6416 top1= 82.2917
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.6574 top1= 84.3750
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.6354 top1= 81.5972

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9442 top1= 77.3738

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.6652 top1= 81.5972

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.038


[E13B10 |   3872/60000 (  6%) ] Loss: 0.6146 top1= 81.9444
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.7433 top1= 79.8611
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.5525 top1= 79.5139
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.4618 top1= 84.0278
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.4239 top1= 86.4583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8977 top1= 76.3321

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.5989 top1= 80.2083

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.030


[E14B10 |   3872/60000 (  6%) ] Loss: 0.4599 top1= 84.0278
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.6993 top1= 79.8611
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.5010 top1= 84.3750
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.4183 top1= 86.1111
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.4485 top1= 86.1111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9570 top1= 73.8582

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.6587 top1= 80.2083

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.043


[E15B10 |   3872/60000 (  6%) ] Loss: 0.4213 top1= 85.0694
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.6483 top1= 82.9861
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.4507 top1= 86.4583
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.5627 top1= 81.2500
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.4847 top1= 85.0694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8779 top1= 78.0649

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.6369 top1= 80.5556

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.035


[E16B10 |   3872/60000 (  6%) ] Loss: 0.5141 top1= 84.7222
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.8527 top1= 78.8194
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.5159 top1= 82.2917
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.4456 top1= 88.1944
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.4435 top1= 84.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8382 top1= 79.1967

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.6466 top1= 80.9028

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.027


[E17B10 |   3872/60000 (  6%) ] Loss: 0.6619 top1= 80.5556
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.9601 top1= 74.3056
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.6117 top1= 82.2917
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.4198 top1= 87.1528
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.5628 top1= 85.0694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8785 top1= 73.4575

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.6246 top1= 83.6806

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.054


[E18B10 |   3872/60000 (  6%) ] Loss: 0.4535 top1= 86.8056
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.6126 top1= 83.6806
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.4953 top1= 87.1528
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.3371 top1= 90.2778
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.3675 top1= 88.5417

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7876 top1= 78.9864

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.4230 top1= 88.1944

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.029


[E19B10 |   3872/60000 (  6%) ] Loss: 0.3707 top1= 88.5417
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.7624 top1= 82.9861
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.4334 top1= 86.1111
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.4414 top1= 88.8889
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.5705 top1= 85.0694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8096 top1= 77.8646

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.4571 top1= 85.7639

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.032


[E20B10 |   3872/60000 (  6%) ] Loss: 0.4343 top1= 86.8056
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.7014 top1= 78.8194
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.6877 top1= 83.3333
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.3837 top1= 88.5417
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.4439 top1= 88.5417

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7604 top1= 79.1567

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.3973 top1= 89.5833

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.029


[E21B10 |   3872/60000 (  6%) ] Loss: 0.5033 top1= 85.4167
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.8156 top1= 78.4722
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.4572 top1= 83.6806
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.4462 top1= 86.8056
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.3639 top1= 89.2361

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8458 top1= 75.8413

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.4772 top1= 86.4583

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.035


[E22B10 |   3872/60000 (  6%) ] Loss: 0.3610 top1= 90.6250
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.4988 top1= 84.7222
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.4845 top1= 84.0278
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.2995 top1= 87.8472
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.4262 top1= 87.8472

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7154 top1= 80.5990

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.4818 top1= 85.7639

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.029


[E23B10 |   3872/60000 (  6%) ] Loss: 0.3378 top1= 88.5417
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.7465 top1= 78.4722
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.3777 top1= 88.5417
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.3405 top1= 88.8889
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.3388 top1= 90.2778

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7065 top1= 82.1915

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.4125 top1= 86.8056

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.023


[E24B10 |   3872/60000 (  6%) ] Loss: 0.3972 top1= 88.1944
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.5176 top1= 85.4167
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.4923 top1= 85.4167
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.4071 top1= 87.8472
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.2729 top1= 92.7083

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6871 top1= 78.7360

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.3200 top1= 90.9722

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.024


[E25B10 |   3872/60000 (  6%) ] Loss: 0.3135 top1= 89.5833
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.5199 top1= 83.3333
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.4133 top1= 87.1528
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.2780 top1= 89.9306
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.2538 top1= 90.9722

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8049 top1= 77.8646

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.4177 top1= 85.7639

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.020


[E26B10 |   3872/60000 (  6%) ] Loss: 0.2749 top1= 91.3194
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.5637 top1= 86.1111
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.4216 top1= 87.8472
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.3104 top1= 90.6250
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.2921 top1= 90.2778

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6129 top1= 83.4034

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.2028 top1= 95.8333

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.025


[E27B10 |   3872/60000 (  6%) ] Loss: 0.3237 top1= 89.2361
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.4284 top1= 86.8056
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.4289 top1= 86.8056
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.3173 top1= 90.6250
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.2578 top1= 91.3194

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5761 top1= 84.0445

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.2948 top1= 90.9722

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.018


[E28B10 |   3872/60000 (  6%) ] Loss: 0.3284 top1= 86.8056
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.2860 top1= 90.6250
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.4811 top1= 87.1528
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.3265 top1= 87.5000
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.2606 top1= 92.0139

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6920 top1= 81.7007

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.3041 top1= 90.6250

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.023


[E29B10 |   3872/60000 (  6%) ] Loss: 0.3355 top1= 89.9306
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.2403 top1= 92.3611
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.5392 top1= 84.0278
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.2976 top1= 91.3194
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.2158 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6609 top1= 79.4972

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.3789 top1= 87.8472

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.030


[E30B10 |   3872/60000 (  6%) ] Loss: 0.3084 top1= 89.5833
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.4749 top1= 86.8056
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.3475 top1= 88.8889
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.3318 top1= 90.9722
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.2357 top1= 92.3611

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7125 top1= 78.6258

