
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker ByzantineWorker(index=9)
=> Add worker ByzantineWorker(index=10)
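The block above registers 9 honest SGD-with-momentum workers and 2 Byzantine workers. A minimal sketch of the registry this implies (the class names `SGDMWorker` and `ByzantineWorker` mirror the log; the constructor signatures and the `build_workers` helper are assumptions for illustration, not the codebase's actual API):

```python
# Hypothetical sketch of the worker setup implied by the log above.
class SGDMWorker:
    """Honest worker running SGD with momentum."""
    def __init__(self, index, momentum=0.9):
        self.index = index
        self.momentum = momentum

    def __repr__(self):
        return f"SGDMWorker(index={self.index}, momentum={self.momentum})"


class ByzantineWorker:
    """Adversarial worker; may send arbitrary updates to its neighbors."""
    def __init__(self, index):
        self.index = index

    def __repr__(self):
        return f"ByzantineWorker(index={self.index})"


def build_workers(n_honest=9, n_byzantine=2, momentum=0.9):
    """Register honest workers first, then Byzantine ones, as in the log."""
    workers = [SGDMWorker(i, momentum) for i in range(n_honest)]
    workers += [ByzantineWorker(n_honest + j) for j in range(n_byzantine)]
    for w in workers:
        print(f"=> Add worker {w!r}")
    return workers
```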

=== Start adding graph ===
<codes.graph_utils.TorusByzantineGraph object at 0x7fe306be8400>
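The `TorusByzantineGraph` name suggests the 9 honest workers communicate over a 2D torus (a 3x3 wrap-around grid, each node with 4 neighbors), with the Byzantine nodes attached on top. A hedged sketch of torus neighbor computation; the actual construction inside `codes.graph_utils` may differ:

```python
def torus_neighbors(i, rows=3, cols=3):
    """Return the 4 wrap-around (torus) grid neighbors of honest node i.

    Nodes are numbered row-major: node i sits at (i // cols, i % cols).
    """
    r, c = divmod(i, cols)
    return sorted({
        ((r - 1) % rows) * cols + c,  # up, wrapping past row 0
        ((r + 1) % rows) * cols + c,  # down
        r * cols + (c - 1) % cols,    # left, wrapping past column 0
        r * cols + (c + 1) % cols,    # right
    })
```

On a 3x3 torus every honest node has degree 4 and the neighbor relation is symmetric, which is what gives decentralized averaging its mixing properties.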

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([1, 1, 2, 1, 5], device='cuda:0')
Worker 10 has targets: tensor([8, 9, 1, 0, 3], device='cuda:0')
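The peek shows a strongly non-IID split: honest worker i sees almost exclusively digit i (with a little spillover into adjacent classes), while the Byzantine workers draw mixed labels. This pattern is consistent with a sort-by-label, slice-into-shards partition; a sketch of that scheme (the actual sampler in the codebase is an assumption here):

```python
def partition_by_label(labels, n_workers):
    """Sort sample indices by label, then cut into contiguous shards,
    so worker i receives (mostly) one class, as in the peek above."""
    order = sorted(range(len(labels)), key=lambda idx: labels[idx])
    shard = len(labels) // n_workers
    return [order[w * shard:(w + 1) * shard] for w in range(n_workers)]
```

With 60000 MNIST samples and 9 honest workers the shard boundaries fall inside classes, which explains why workers 3-8 see a mix of two neighboring digits rather than exactly one.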



=== Log global consensus distance @ E1B0 ===
consensus_distance=0.000
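Consensus distance here presumably measures how far worker models have drifted apart: one common definition is the RMS distance of the worker parameter vectors to their mean. It is 0.000 at E1B0 because all workers start from identical weights; the later epochs settle around 0.02-0.06 as local non-IID gradients pull the models apart while gossip averaging pulls them back. A sketch of that definition (the exact formula used by this logger, and which workers it includes, are assumptions):

```python
import math

def consensus_distance(models):
    """RMS distance of worker parameter vectors to their mean.

    models: list of equal-length lists of floats (flattened parameters).
    Returns sqrt( (1/n) * sum_i ||x_i - x_bar||^2 ).
    """
    n, d = len(models), len(models[0])
    mean = [sum(m[k] for m in models) / n for k in range(d)]
    sq = sum(sum((m[k] - mean[k]) ** 2 for k in range(d)) for m in models) / n
    return math.sqrt(sq)
```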


[E 1B10 |   3872/60000 (  6%) ] Loss: 1.3435 top1= 67.0139
[E 1B20 |   7392/60000 ( 12%) ] Loss: 1.0810 top1= 58.3333
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.6576 top1= 81.2500
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.4454 top1= 87.8472
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.3806 top1= 88.8889
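The per-batch lines follow a fixed format: epoch, batch index, cumulative samples seen out of the dataset size, a percentage, then training loss and top-1 accuracy on that batch. If you want to post-process a run, a small parser along these lines should work (the regex is written against the lines above, not taken from the codebase):

```python
import re

# Matches lines like:
# [E 1B10 |   3872/60000 (  6%) ] Loss: 1.3435 top1= 67.0139
LINE_RE = re.compile(
    r"\[E\s*(\d+)B(\d+)\s*\|\s*(\d+)/(\d+)\s*\(\s*(\d+)%\)\s*\]"
    r"\s*Loss:\s*([\d.]+)\s*top1=\s*([\d.]+)"
)

def parse_batch_line(line):
    """Return a dict of the fields in one batch log line, or None."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    epoch, batch, seen, total, pct, loss, top1 = m.groups()
    return {"epoch": int(epoch), "batch": int(batch),
            "seen": int(seen), "total": int(total), "pct": int(pct),
            "loss": float(loss), "top1": float(top1)}
```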

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7978 top1= 73.4275
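The "Averaged model" line evaluates a single model whose parameters are the element-wise average over workers (the log does not show whether Byzantine workers are included in the average). A minimal sketch of state-dict averaging under that reading; real code would operate on PyTorch tensors rather than plain lists:

```python
def average_state_dicts(state_dicts):
    """Element-wise average of worker state dicts (name -> flat list of
    floats), producing the parameters of the model evaluated above."""
    n = len(state_dicts)
    return {
        name: [sum(sd[name][i] for sd in state_dicts) / n
               for i in range(len(state_dicts[0][name]))]
        for name in state_dicts[0]
    }
```

Note the gap between per-batch training top-1 (high 80s-90s) and the averaged model's validation top-1 (low-to-high 70s): each worker overfits its own one-or-two-digit shard, so the average generalizes worse than any local training score suggests.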

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.4017 top1= 86.4583

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.065


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.5300 top1= 81.2500
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.4622 top1= 85.7639
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.4916 top1= 83.3333
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.3736 top1= 87.5000
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.3435 top1= 88.1944

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7882 top1= 74.4491

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.2729 top1= 90.6250

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.059


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.2818 top1= 89.5833
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.2749 top1= 91.3194
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.2728 top1= 91.3194
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.3387 top1= 89.9306
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.2380 top1= 91.3194

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8397 top1= 70.4327

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.4136 top1= 87.1528

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.057


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.4919 top1= 81.5972
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.3537 top1= 85.4167
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.3058 top1= 88.5417
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.3030 top1= 90.2778
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.2250 top1= 92.3611

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7653 top1= 73.4876

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.2033 top1= 92.0139

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.033


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.3096 top1= 89.9306
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.3189 top1= 88.5417
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.3511 top1= 89.2361
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.2591 top1= 90.2778
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.2242 top1= 93.4028

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8999 top1= 68.1490

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.2411 top1= 91.6667

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.036


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.2285 top1= 94.7917
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.2242 top1= 93.0556
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.2183 top1= 93.0556
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.1808 top1= 94.0972
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.1912 top1= 92.3611

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7149 top1= 76.1619

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.1409 top1= 95.8333

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.035


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.1629 top1= 95.1389
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.1842 top1= 93.7500
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.1821 top1= 94.7917
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.1836 top1= 93.7500
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.2134 top1= 91.3194

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7389 top1= 75.8413

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.2776 top1= 89.2361

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.047


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.3190 top1= 89.2361
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.4333 top1= 81.2500
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.2985 top1= 88.1944
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.2156 top1= 93.0556
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.2668 top1= 91.6667

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6523 top1= 78.8061

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.1953 top1= 93.4028

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.030


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.1655 top1= 95.4861
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.2439 top1= 90.9722
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.2568 top1= 88.8889
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.1838 top1= 93.4028
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.1820 top1= 93.4028

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8879 top1= 74.9099

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.3130 top1= 88.5417

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.028


[E10B10 |   3872/60000 (  6%) ] Loss: 0.2283 top1= 91.6667
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.2326 top1= 92.0139
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.2072 top1= 93.7500
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.1643 top1= 95.4861
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.1520 top1= 96.1806

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7334 top1= 77.1034

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.1931 top1= 94.4444

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.037


[E11B10 |   3872/60000 (  6%) ] Loss: 0.3244 top1= 90.6250
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.2835 top1= 88.1944
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.2906 top1= 88.1944
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.2612 top1= 89.9306
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.1465 top1= 96.5278

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7037 top1= 76.8229

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.2126 top1= 93.0556

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.037


[E12B10 |   3872/60000 (  6%) ] Loss: 0.8543 top1= 75.0000
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.3977 top1= 84.7222
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.2970 top1= 89.2361
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.2926 top1= 90.9722
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.4553 top1= 85.0694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8374 top1= 72.4559

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.2062 top1= 93.0556

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.056


[E13B10 |   3872/60000 (  6%) ] Loss: 0.2402 top1= 93.7500
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.2718 top1= 89.5833
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.2511 top1= 88.8889
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.1744 top1= 93.0556
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.1840 top1= 92.3611

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7125 top1= 76.1018

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.2530 top1= 89.5833

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.034


[E14B10 |   3872/60000 (  6%) ] Loss: 0.1830 top1= 94.0972
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.2612 top1= 90.6250
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.2428 top1= 90.2778
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.1438 top1= 96.8750
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.1973 top1= 93.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6907 top1= 77.1334

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.1864 top1= 93.0556

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.031


[E15B10 |   3872/60000 (  6%) ] Loss: 0.2355 top1= 89.5833
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.2611 top1= 89.2361
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.2109 top1= 92.3611
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.1699 top1= 96.5278
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.1912 top1= 93.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6302 top1= 78.3954

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.1822 top1= 94.4444

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.029


[E16B10 |   3872/60000 (  6%) ] Loss: 0.1687 top1= 94.0972
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.1924 top1= 92.0139
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.2961 top1= 87.5000
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.1794 top1= 94.7917
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.1832 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6609 top1= 77.5441

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.1234 top1= 96.8750

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.044


[E17B10 |   3872/60000 (  6%) ] Loss: 0.1622 top1= 95.8333
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.1933 top1= 93.7500
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.1850 top1= 93.4028
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.1335 top1= 96.5278
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.1841 top1= 92.0139

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6507 top1= 77.1034

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.1229 top1= 95.4861

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.027


[E18B10 |   3872/60000 (  6%) ] Loss: 0.2682 top1= 88.5417
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.2173 top1= 93.0556
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.3212 top1= 86.1111
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.2120 top1= 91.3194
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.1796 top1= 92.0139

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5399 top1= 80.9996

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.1462 top1= 93.7500

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.023


[E19B10 |   3872/60000 (  6%) ] Loss: 0.2043 top1= 91.6667
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.2654 top1= 89.2361
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.1672 top1= 92.7083
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.2263 top1= 90.9722
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.2151 top1= 90.9722

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5652 top1= 79.8778

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.1847 top1= 93.7500

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.030


[E20B10 |   3872/60000 (  6%) ] Loss: 0.1888 top1= 91.6667
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.2304 top1= 90.2778
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.2495 top1= 89.5833
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.1601 top1= 94.0972
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.1310 top1= 93.4028

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7076 top1= 76.1018

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.1990 top1= 93.0556

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.032


[E21B10 |   3872/60000 (  6%) ] Loss: 0.1839 top1= 93.0556
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.1324 top1= 95.8333
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.1415 top1= 94.4444
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.3025 top1= 90.2778
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.1862 top1= 93.0556

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7270 top1= 76.0517

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.1477 top1= 94.7917

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.037


[E22B10 |   3872/60000 (  6%) ] Loss: 0.2034 top1= 90.2778
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.2311 top1= 89.2361
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.1727 top1= 94.7917
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.1354 top1= 95.4861
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.1414 top1= 93.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5692 top1= 79.9679

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.1145 top1= 96.8750

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.037


[E23B10 |   3872/60000 (  6%) ] Loss: 0.1343 top1= 98.2639
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.2454 top1= 90.2778
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.2173 top1= 93.4028
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.1384 top1= 95.8333
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.1234 top1= 96.8750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6522 top1= 78.7460

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.1789 top1= 94.0972

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.036


[E24B10 |   3872/60000 (  6%) ] Loss: 0.1584 top1= 94.0972
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.2086 top1= 91.6667
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.2137 top1= 91.3194
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.2344 top1= 91.3194
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.2483 top1= 94.0972

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5810 top1= 79.4571

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.1701 top1= 94.7917

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.039


[E25B10 |   3872/60000 (  6%) ] Loss: 0.1399 top1= 95.1389
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.1812 top1= 92.7083
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.2065 top1= 93.0556
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.1915 top1= 94.4444
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.2021 top1= 91.6667

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6796 top1= 77.7444

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.1918 top1= 94.4444

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.031


[E26B10 |   3872/60000 (  6%) ] Loss: 0.1796 top1= 94.7917
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.2374 top1= 91.6667
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.2381 top1= 92.0139
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.2123 top1= 92.0139
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.1529 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6518 top1= 78.2151

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.2098 top1= 92.7083

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.037


[E27B10 |   3872/60000 (  6%) ] Loss: 0.1490 top1= 95.1389
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.2599 top1= 89.5833
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.1904 top1= 92.7083
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.1825 top1= 94.7917
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.1405 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5209 top1= 80.7292

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.1616 top1= 93.7500

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.025


[E28B10 |   3872/60000 (  6%) ] Loss: 0.2118 top1= 93.7500
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.2299 top1= 93.0556
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.2383 top1= 91.3194
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.2139 top1= 90.9722
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.2529 top1= 90.2778

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6897 top1= 75.2905

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.2062 top1= 93.7500

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.035


[E29B10 |   3872/60000 (  6%) ] Loss: 0.2718 top1= 90.2778
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.3250 top1= 87.8472
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.2338 top1= 93.0556
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.1572 top1= 95.4861
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.1875 top1= 92.7083

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6995 top1= 77.5240

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.1551 top1= 95.1389

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.036


[E30B10 |   3872/60000 (  6%) ] Loss: 0.1451 top1= 95.4861
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.1513 top1= 95.4861
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.2141 top1= 91.6667
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.1926 top1= 94.7917
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.1521 top1= 94.7917

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7707 top1= 76.2620

