
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker BitFlippingWorker
=> Add worker BitFlippingWorker
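The run above pairs 9 honest SGD-with-momentum workers with 2 Byzantine `BitFlippingWorker`s. As a hedged sketch of what such a worker might do (the actual codebase is not shown here), "bit flipping" is commonly implemented as a sign flip on the honest update; the class and method names below are illustrative, not taken from the real code:

```python
class SGDMWorker:
    """Honest worker: SGD with momentum (scalar params for illustration)."""
    def __init__(self, index, momentum=0.9):
        self.index = index
        self.momentum = momentum
        self.velocity = 0.0

    def compute_update(self, gradient):
        # v <- momentum * v + g; the velocity is the update shared with neighbors
        self.velocity = self.momentum * self.velocity + gradient
        return self.velocity


class BitFlippingWorker(SGDMWorker):
    """Byzantine worker: broadcasts the sign-flipped honest update."""
    def compute_update(self, gradient):
        return -super().compute_update(gradient)
```

With momentum 0.9, two consecutive unit gradients give honest updates 1.0 and 1.9; the Byzantine worker would broadcast -1.0 and -1.9 instead.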

=== Start adding graph ===
=> Add graph codes.graph_utils.TorusByzantineGraph
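A hedged sketch of the communication topology the `TorusByzantineGraph` name suggests: the 9 honest workers arranged on a 3x3 grid with wrap-around edges (a 2-D torus). How the 2 Byzantine workers attach, and the real `codes.graph_utils` implementation, are assumptions not confirmed by this log:

```python
def torus_neighbors(i, rows=3, cols=3):
    """Wrap-around grid neighbors of node i on a rows x cols torus."""
    r, c = divmod(i, cols)
    return sorted({
        ((r - 1) % rows) * cols + c,  # up (wraps to bottom row)
        ((r + 1) % rows) * cols + c,  # down (wraps to top row)
        r * cols + (c - 1) % cols,    # left (wraps to last column)
        r * cols + (c + 1) % cols,    # right (wraps to first column)
    })
```

On a 3x3 torus every node has exactly 4 neighbors, e.g. node 0 is connected to 1, 2, 3, and 6.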

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([1, 1, 2, 1, 5], device='cuda:0')
Worker 10 has targets: tensor([8, 9, 1, 0, 3], device='cuda:0')
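The peek above shows a strongly label-skewed (non-IID) split: each honest worker sees essentially one digit class, with some spillover at shard boundaries (e.g. worker 3 mixes 3s and 4s), while the Byzantine workers (9 and 10) draw mixed labels. A hedged sketch of one partition scheme that produces this pattern, sorting by label and splitting contiguously (the function name is illustrative):

```python
def partition_by_label(targets, n_workers):
    """Sort sample indices by label, then split into contiguous shards,
    so worker k receives (mostly) a single class with boundary spillover."""
    order = sorted(range(len(targets)), key=lambda i: targets[i])
    shard = len(order) // n_workers
    return [order[k * shard:(k + 1) * shard] for k in range(n_workers)]
```

For MNIST with 10 classes and 9 honest workers, each shard of 60000/9 samples straddles roughly one class boundary, matching the mixed tensors seen for workers 3 through 8.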
=== Log global consensus distance @ E1B0 ===
consensus_distance=0.325
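The logged consensus distance grows steadily (0.325 at E1B0 to 1.864 at E30B0), indicating that the Byzantine workers keep the honest models from contracting to a common point. As a hedged sketch of one plausible definition (the real code may use squared distances, a max over workers, or a different normalization), it can be computed as the average L2 distance of each worker's parameter vector from the mean:

```python
import math

def consensus_distance(params):
    """Average L2 distance of each worker's parameter vector from the mean.

    params: list of per-worker parameter vectors (lists of floats).
    """
    n, d = len(params), len(params[0])
    mean = [sum(p[j] for p in params) / n for j in range(d)]
    dists = [math.sqrt(sum((p[j] - mean[j]) ** 2 for j in range(d)))
             for p in params]
    return sum(dists) / n
```

Under this definition, two workers at 0.0 and 2.0 on a single parameter have consensus distance 1.0.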


[E 1B10 |   3872/60000 (  6%) ] Loss: 0.6934 top1= 76.7361
[E 1B20 |   7392/60000 ( 12%) ] Loss: 0.5946 top1= 80.5556
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.3486 top1= 88.5417
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.2399 top1= 93.0556
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.1930 top1= 94.0972

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.5771 top1= 46.1538
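The gap between per-worker training accuracy (~95%) and the averaged model's validation accuracy (~46% here) reflects both the non-IID shards and the Byzantine influence: each worker fits its own digit, but the coordinate-wise average of the models generalizes poorly. A hedged sketch of the evaluation step, assuming the "Averaged model" is the coordinate-wise mean of worker parameters (whether all 11 workers or only the 9 honest ones are averaged is not stated in the log):

```python
def average_params(worker_params):
    """Coordinate-wise mean of a list of per-worker parameter vectors."""
    n = len(worker_params)
    d = len(worker_params[0])
    return [sum(p[j] for p in worker_params) / n for j in range(d)]
```

The averaged vector would then be loaded into a single model and evaluated on the held-out validation set to produce the `Eval Loss` and `top1` figures reported after each epoch.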

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.1104 top1= 97.5694

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.482


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.1055 top1= 95.4861
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.1591 top1= 94.4444
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.1625 top1= 94.4444
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.0799 top1= 98.6111
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.0782 top1= 97.5694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.5812 top1= 45.1823

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.2229 top1= 98.2639

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.625


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.1463 top1= 93.7500
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.2182 top1= 93.7500
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.0697 top1= 98.6111
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.0723 top1= 96.5278
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.4846 top1= 95.8333

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.5044 top1= 41.8470

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.0853 top1= 96.8750

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.725


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.1371 top1= 93.7500
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.1607 top1= 94.7917
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.1059 top1= 97.2222
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.0511 top1= 99.3056
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.5375 top1= 95.8333

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.5123 top1= 42.7183

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.0595 top1= 98.2639

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.739


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.0827 top1= 96.5278
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.2725 top1= 98.2639
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.2469 top1= 93.0556
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.0447 top1= 98.6111
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.1986 top1= 93.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3476 top1= 48.9483

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.1333 top1= 96.5278

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.805


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.1271 top1= 96.8750
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.2493 top1= 90.2778
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.3149 top1= 95.8333
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.0474 top1= 98.6111
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.0576 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3972 top1= 44.9920

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.0879 top1= 96.5278

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.853


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.0760 top1= 97.5694
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.0545 top1= 98.6111
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.1839 top1= 95.8333
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.0583 top1= 98.2639
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.0239 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3427 top1= 48.3974

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.0496 top1= 98.2639

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.928


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.0838 top1= 95.8333
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.2845 top1= 92.3611
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.0942 top1= 96.8750
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.0174 top1= 99.3056
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.1165 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2877 top1= 50.6210

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.0516 top1= 98.2639

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.967


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.1471 top1= 92.7083
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.0487 top1= 98.2639
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.0494 top1= 98.9583
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.0651 top1= 97.2222
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.0440 top1= 97.9167

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3059 top1= 50.1903

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.0299 top1= 98.9583

=== Log global consensus distance @ E10B0 ===
consensus_distance=1.021


[E10B10 |   3872/60000 (  6%) ] Loss: 0.0494 top1= 97.9167
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.2025 top1= 98.9583
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.1311 top1= 95.8333
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.0200 top1= 99.3056
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.0177 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2942 top1= 51.6927

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.0439 top1= 98.9583

=== Log global consensus distance @ E11B0 ===
consensus_distance=1.070


[E11B10 |   3872/60000 (  6%) ] Loss: 0.1360 top1= 95.4861
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.0507 top1= 98.6111
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.0245 top1= 99.3056
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.3199 top1= 97.2222
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.0170 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2985 top1= 53.4756

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.1080 top1= 98.9583

=== Log global consensus distance @ E12B0 ===
consensus_distance=1.129


[E12B10 |   3872/60000 (  6%) ] Loss: 0.2482 top1= 92.0139
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.0210 top1= 99.6528
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.0327 top1= 99.3056
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.0900 top1= 96.8750
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.0199 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2945 top1= 51.2520

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.0258 top1= 99.3056

=== Log global consensus distance @ E13B0 ===
consensus_distance=1.176


[E13B10 |   3872/60000 (  6%) ] Loss: 0.0311 top1= 99.3056
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.0780 top1= 97.5694
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.0481 top1= 98.9583
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.0175 top1= 99.3056
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.0129 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3278 top1= 51.6727

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.0265 top1= 99.6528

=== Log global consensus distance @ E14B0 ===
consensus_distance=1.224


[E14B10 |   3872/60000 (  6%) ] Loss: 0.0186 top1=100.0000
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.0357 top1= 98.2639
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.1903 top1= 94.7917
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.0143 top1= 99.6528
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.0208 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3075 top1= 49.1086

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.0155 top1= 99.6528

=== Log global consensus distance @ E15B0 ===
consensus_distance=1.281


[E15B10 |   3872/60000 (  6%) ] Loss: 0.0384 top1= 99.3056
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.0361 top1= 99.3056
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.0319 top1= 99.3056
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.0084 top1=100.0000
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.0099 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2827 top1= 51.5925

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.0234 top1= 98.6111

=== Log global consensus distance @ E16B0 ===
consensus_distance=1.330


[E16B10 |   3872/60000 (  6%) ] Loss: 0.8346 top1= 92.0139
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.0304 top1= 99.3056
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.0373 top1= 99.3056
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.0227 top1= 99.3056
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.0094 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2424 top1= 52.8446

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.0210 top1= 99.6528

=== Log global consensus distance @ E17B0 ===
consensus_distance=1.403


[E17B10 |   3872/60000 (  6%) ] Loss: 0.0502 top1= 98.6111
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.0677 top1= 97.5694
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.0551 top1= 99.3056
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.0182 top1= 99.6528
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.0131 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2242 top1= 55.3085

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.0216 top1= 99.6528

=== Log global consensus distance @ E18B0 ===
consensus_distance=1.447


[E18B10 |   3872/60000 (  6%) ] Loss: 0.0189 top1= 99.6528
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.0206 top1= 99.3056
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.0374 top1= 99.3056
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.0182 top1= 99.6528
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.0120 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2482 top1= 51.9431

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.0251 top1= 99.6528

=== Log global consensus distance @ E19B0 ===
consensus_distance=1.490


[E19B10 |   3872/60000 (  6%) ] Loss: 0.0133 top1= 99.6528
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.0148 top1= 99.6528
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.0387 top1= 99.3056
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.0147 top1= 99.6528
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.0080 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2368 top1= 52.8946

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.0196 top1= 99.6528

=== Log global consensus distance @ E20B0 ===
consensus_distance=1.530


[E20B10 |   3872/60000 (  6%) ] Loss: 0.0411 top1= 98.9583
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.0093 top1=100.0000
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.0454 top1= 97.9167
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.0283 top1= 99.3056
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.2456 top1= 96.8750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3079 top1= 53.3153

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.0194 top1= 99.6528

=== Log global consensus distance @ E21B0 ===
consensus_distance=1.560


[E21B10 |   3872/60000 (  6%) ] Loss: 0.0480 top1= 98.9583
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.1119 top1= 98.9583
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.0425 top1= 98.2639
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.0220 top1= 99.3056
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.4403 top1= 96.8750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.6564 top1= 36.7588

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.5408 top1= 95.4861

=== Log global consensus distance @ E22B0 ===
consensus_distance=1.638


[E22B10 |   3872/60000 (  6%) ] Loss: 0.0448 top1= 98.2639
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.0436 top1= 98.2639
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.0307 top1= 98.9583
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.0190 top1= 99.6528
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.0323 top1= 98.2639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3110 top1= 48.6178

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.0536 top1= 97.2222

=== Log global consensus distance @ E23B0 ===
consensus_distance=1.640


[E23B10 |   3872/60000 (  6%) ] Loss: 0.1036 top1= 96.1806
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.0337 top1= 99.3056
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.0351 top1= 98.9583
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.0361 top1= 98.6111
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.0037 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1711 top1= 55.1482

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.0117 top1= 99.6528

=== Log global consensus distance @ E24B0 ===
consensus_distance=1.686


[E24B10 |   3872/60000 (  6%) ] Loss: 0.0313 top1= 99.3056
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.3301 top1= 97.9167
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.0218 top1= 99.3056
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.1299 top1= 97.5694
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.0288 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3470 top1= 51.1018

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.0317 top1= 99.6528

=== Log global consensus distance @ E25B0 ===
consensus_distance=1.707


[E25B10 |   3872/60000 (  6%) ] Loss: 0.0100 top1= 99.6528
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.0117 top1=100.0000
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.0152 top1= 99.3056
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.2863 top1= 97.2222
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.0072 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1823 top1= 54.3069

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.0106 top1= 99.6528

=== Log global consensus distance @ E26B0 ===
consensus_distance=1.743


[E26B10 |   3872/60000 (  6%) ] Loss: 0.0072 top1=100.0000
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.0063 top1=100.0000
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.0130 top1= 99.3056
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.0163 top1= 99.6528
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.0057 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7109 top1= 36.2680

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.3555 top1= 95.4861

=== Log global consensus distance @ E27B0 ===
consensus_distance=1.807


[E27B10 |   3872/60000 (  6%) ] Loss: 0.0168 top1= 99.6528
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.0143 top1= 99.6528
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.0239 top1= 99.3056
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.0207 top1= 98.6111
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.0109 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2179 top1= 53.8061

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.0070 top1=100.0000

=== Log global consensus distance @ E28B0 ===
consensus_distance=1.805


[E28B10 |   3872/60000 (  6%) ] Loss: 0.0243 top1= 98.9583
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.0334 top1= 98.9583
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.0520 top1= 98.9583
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.0120 top1= 99.6528
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.0027 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2233 top1= 52.0232

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.0042 top1=100.0000

=== Log global consensus distance @ E29B0 ===
consensus_distance=1.836


[E29B10 |   3872/60000 (  6%) ] Loss: 0.0603 top1= 98.6111
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.0350 top1= 99.3056
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.0161 top1= 99.6528
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.0079 top1= 99.6528
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.0316 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2504 top1= 55.4387

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.0381 top1= 98.9583

=== Log global consensus distance @ E30B0 ===
consensus_distance=1.864


[E30B10 |   3872/60000 (  6%) ] Loss: 0.0123 top1= 99.6528
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.0097 top1=100.0000
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.0345 top1= 97.9167
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.0232 top1= 99.3056
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.0164 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1979 top1= 54.4271

