
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker ByzantineWorker(index=9)
=> Add worker ByzantineWorker(index=10)
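The nine honest workers run SGD with momentum 0.9, while the two ByzantineWorkers inject adversarial updates. A minimal sketch of the momentum update (hypothetical, PyTorch-style buffer; the actual SGDMWorker implementation is not shown in this log):

```python
import numpy as np

# Hypothetical sketch of the SGDM (SGD-with-momentum) step each honest
# worker above performs; the real SGDMWorker class is not shown here.
# A ByzantineWorker, by contrast, sends arbitrary/adversarial updates.
def sgdm_step(w, grad, buf, lr=0.1, momentum=0.9):
    buf = momentum * buf + grad          # accumulate the momentum buffer
    return w - lr * buf, buf             # descend along the buffer

w, buf = np.zeros(2), np.zeros(2)
g = np.array([1.0, -2.0])
w, buf = sgdm_step(w, g, buf)
print(w)   # first step equals plain SGD: [-0.1  0.2]
```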

=== Start adding graph ===
<codes.graph_utils.TorusByzantineGraph object at 0x7f6d7fd18400>
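The mixing topology is a torus, i.e. a 2D grid with wrap-around edges. A hypothetical sketch of that neighbourhood structure (the actual `TorusByzantineGraph` in `codes.graph_utils` may differ, e.g. in how the Byzantine nodes are attached):

```python
# Hypothetical sketch of a torus communication topology; the real
# codes.graph_utils.TorusByzantineGraph API is not shown in this log.
def torus_neighbors(index, rows, cols):
    """Return the 4 wrap-around grid neighbours of a node on a rows x cols torus."""
    r, c = divmod(index, cols)
    return [
        ((r - 1) % rows) * cols + c,  # up
        ((r + 1) % rows) * cols + c,  # down
        r * cols + (c - 1) % cols,    # left
        r * cols + (c + 1) % cols,    # right
    ]

print(torus_neighbors(0, 3, 3))   # corner node wraps around: [6, 3, 2, 1]
```

Every node has the same degree, which is what makes gossip averaging on a torus well balanced.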

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([1, 1, 2, 1, 5], device='cuda:0')
Worker 10 has targets: tensor([8, 9, 1, 0, 3], device='cuda:0')
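The peek shows an extreme non-IID split: each honest worker draws from essentially one digit class, with small spill-over between adjacent workers. A minimal sketch of label-sorted sharding that reproduces this pattern (an assumption; the script's actual sampler is not shown):

```python
import numpy as np

# Hypothetical label-sorted (non-IID) partitioning consistent with the peek
# above: sort the 60000 labels, then slice into contiguous per-worker shards,
# so each of the 9 honest workers sees (almost) a single class.
def label_sorted_shards(targets, n_workers):
    order = np.argsort(targets, kind="stable")   # group sample indices by class
    return np.array_split(order, n_workers)      # contiguous shards

targets = np.repeat(np.arange(10), 6000)         # 60000 MNIST-like labels
shards = label_sorted_shards(targets, 9)
print([np.unique(targets[s]).tolist() for s in shards[:3]])
# → [[0, 1], [1, 2], [2, 3]]  (10 classes over 9 workers forces some overlap)
```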

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.162
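`consensus_distance` measures how far the worker models have drifted apart. One common definition (hypothetical here; the exact formula the script logs is not shown) is the mean L2 distance between each worker's flattened parameters and the parameter average:

```python
import numpy as np

# Hypothetical consensus-distance metric: average L2 distance of each
# worker's flattened parameter vector from the mean parameter vector.
def consensus_distance(params):              # params: (n_workers, dim)
    mean = params.mean(axis=0)
    return np.mean(np.linalg.norm(params - mean, axis=1))

workers = np.stack([np.full(4, i, dtype=float) for i in range(3)])
print(round(consensus_distance(workers), 4))   # → 1.3333
```

The jump from 0.162 here to roughly 10–16 in later epochs is consistent with the Byzantine messages pushing worker models apart as training proceeds.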


[E 1B10 |   3872/60000 (  6%) ] Loss: 0.7094 top1= 82.2917
[E 1B20 |   7392/60000 ( 12%) ] Loss: 0.5075 top1= 84.7222
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.3139 top1= 90.9722
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.3021 top1= 90.6250
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.3397 top1= 90.6250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.6818 top1= 52.5040
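The "Averaged model" line evaluates a single model built by averaging worker parameters. A minimal sketch, under the assumption that only the nine honest workers enter the average; the gap between ~90% local training accuracy and ~52% averaged validation accuracy is typical when models trained on disjoint single-class shards are naively averaged:

```python
import numpy as np

# Hypothetical sketch of the global-average evaluation: average the workers'
# parameter vectors into one model, then validate that single model.
# Assumption: only the 9 honest workers (indices 0-8) enter the average.
def averaged_model(worker_params, honest_idx):
    return np.mean([worker_params[i] for i in honest_idx], axis=0)

params = {i: np.full(3, float(i)) for i in range(11)}   # toy per-worker weights
print(averaged_model(params, range(9)))                 # → [4. 4. 4.]
```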

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.4168 top1= 95.1389

=== Log global consensus distance @ E2B0 ===
consensus_distance=9.559


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.6218 top1= 83.6806
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.6696 top1= 80.9028
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.5745 top1= 85.7639
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.6147 top1= 82.6389
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.6118 top1= 80.5556

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7893 top1= 44.9820

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.5562 top1= 84.3750

=== Log global consensus distance @ E3B0 ===
consensus_distance=16.049


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.5677 top1= 83.6806
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.6424 top1= 79.8611
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.5998 top1= 85.4167
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.6187 top1= 82.6389
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.5369 top1= 81.9444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.7033 top1= 52.0533

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.4863 top1= 85.7639

=== Log global consensus distance @ E4B0 ===
consensus_distance=16.170


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.4763 top1= 86.4583
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.5239 top1= 83.3333
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.4694 top1= 86.4583
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.4961 top1= 84.7222
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.6282 top1= 82.6389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.6258 top1= 53.4756

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.4524 top1= 87.1528

=== Log global consensus distance @ E5B0 ===
consensus_distance=13.787


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.6144 top1= 83.6806
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.5283 top1= 83.3333
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.4268 top1= 86.8056
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.4839 top1= 84.0278
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.4557 top1= 84.7222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.5771 top1= 53.4355

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.4444 top1= 87.5000

=== Log global consensus distance @ E6B0 ===
consensus_distance=12.314


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.4666 top1= 86.1111
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.4982 top1= 85.7639
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.5647 top1= 85.7639
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.4935 top1= 85.4167
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.4857 top1= 84.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.5623 top1= 51.7829

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.4315 top1= 87.1528

=== Log global consensus distance @ E7B0 ===
consensus_distance=11.494


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.4471 top1= 86.8056
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.6537 top1= 80.2083
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.4479 top1= 87.5000
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.4733 top1= 85.0694
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.4606 top1= 85.0694

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4727 top1= 57.3417

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.4220 top1= 87.8472

=== Log global consensus distance @ E8B0 ===
consensus_distance=10.892


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.4615 top1= 85.7639
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.5321 top1= 83.6806
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.5701 top1= 85.7639
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.5306 top1= 83.6806
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.4399 top1= 84.3750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4359 top1= 55.3586

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.3050 top1= 87.8472

=== Log global consensus distance @ E9B0 ===
consensus_distance=11.008


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.2643 top1= 93.7500
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.3034 top1= 89.5833
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.3766 top1= 88.1944
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.2369 top1= 89.5833
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.1489 top1= 96.1806

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3121 top1= 53.8061

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.1288 top1= 98.2639

=== Log global consensus distance @ E10B0 ===
consensus_distance=10.978


[E10B10 |   3872/60000 (  6%) ] Loss: 0.1989 top1= 94.7917
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.0933 top1= 97.5694
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.2383 top1= 94.7917
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.0829 top1= 97.2222
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.1381 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2795 top1= 53.9163

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.0681 top1= 97.9167

=== Log global consensus distance @ E11B0 ===
consensus_distance=11.003


[E11B10 |   3872/60000 (  6%) ] Loss: 0.0906 top1= 97.2222
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.0534 top1= 97.5694
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.0954 top1= 96.8750
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.0655 top1= 97.5694
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.0599 top1= 98.2639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2890 top1= 53.3754

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.0552 top1= 98.2639

=== Log global consensus distance @ E12B0 ===
consensus_distance=11.076


[E12B10 |   3872/60000 (  6%) ] Loss: 0.1105 top1= 96.1806
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.1475 top1= 94.7917
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.3524 top1= 95.8333
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.0374 top1= 98.9583
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.1872 top1= 93.4028

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2516 top1= 55.9195

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.0515 top1= 98.2639

=== Log global consensus distance @ E13B0 ===
consensus_distance=11.145


[E13B10 |   3872/60000 (  6%) ] Loss: 0.0414 top1= 98.9583
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.0783 top1= 97.2222
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.3546 top1= 95.1389
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.0493 top1= 98.2639
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.0953 top1= 96.8750

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2308 top1= 57.3017

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.0543 top1= 97.9167

=== Log global consensus distance @ E14B0 ===
consensus_distance=11.203


[E14B10 |   3872/60000 (  6%) ] Loss: 0.1755 top1= 95.1389
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.0732 top1= 98.2639
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.1149 top1= 96.1806
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.0635 top1= 97.9167
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.1242 top1= 95.1389

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3176 top1= 49.9800

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.0676 top1= 98.9583

=== Log global consensus distance @ E15B0 ===
consensus_distance=10.398


[E15B10 |   3872/60000 (  6%) ] Loss: 0.0380 top1= 98.9583
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.0512 top1= 98.2639
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.3774 top1= 96.1806
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.0316 top1= 99.6528
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.1557 top1= 94.4444

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3042 top1= 50.9014

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.0842 top1= 97.2222

=== Log global consensus distance @ E16B0 ===
consensus_distance=10.466


[E16B10 |   3872/60000 (  6%) ] Loss: 0.0468 top1= 98.6111
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.0414 top1= 98.2639
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.0609 top1= 97.9167
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.0463 top1= 98.6111
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.0289 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2314 top1= 52.6542

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.0430 top1= 98.6111

=== Log global consensus distance @ E17B0 ===
consensus_distance=10.520


[E17B10 |   3872/60000 (  6%) ] Loss: 0.0428 top1= 98.6111
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.0335 top1= 99.6528
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.0623 top1= 97.9167
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.0274 top1= 99.3056
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.0314 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1607 top1= 55.9495

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.0458 top1= 98.9583

=== Log global consensus distance @ E18B0 ===
consensus_distance=10.543


[E18B10 |   3872/60000 (  6%) ] Loss: 0.0389 top1= 98.6111
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.0265 top1= 99.3056
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.0371 top1= 98.6111
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.0399 top1= 98.9583
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.0299 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2069 top1= 54.4071

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.0267 top1= 99.3056

=== Log global consensus distance @ E19B0 ===
consensus_distance=10.598


[E19B10 |   3872/60000 (  6%) ] Loss: 0.0415 top1= 98.6111
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.1043 top1= 96.8750
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.0268 top1= 98.9583
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.0194 top1= 99.6528
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.0536 top1= 98.2639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1700 top1= 55.3085

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.0201 top1= 99.3056

=== Log global consensus distance @ E20B0 ===
consensus_distance=10.656


[E20B10 |   3872/60000 (  6%) ] Loss: 0.0387 top1= 98.6111
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.0823 top1= 97.5694
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.0257 top1= 99.3056
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.0222 top1= 98.9583
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.0132 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1827 top1= 55.5188

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.0294 top1= 98.9583

=== Log global consensus distance @ E21B0 ===
consensus_distance=10.711


[E21B10 |   3872/60000 (  6%) ] Loss: 0.0461 top1= 98.6111
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.0892 top1= 97.2222
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.0419 top1= 98.2639
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.0230 top1= 99.3056
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.0144 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1826 top1= 56.0797

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.0222 top1= 99.3056

=== Log global consensus distance @ E22B0 ===
consensus_distance=10.743


[E22B10 |   3872/60000 (  6%) ] Loss: 0.0390 top1= 98.9583
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.0185 top1= 99.6528
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.4509 top1= 97.2222
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.0190 top1= 99.6528
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.0126 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1273 top1= 58.3233

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.0209 top1= 99.3056

=== Log global consensus distance @ E23B0 ===
consensus_distance=10.788


[E23B10 |   3872/60000 (  6%) ] Loss: 0.0142 top1= 99.6528
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.1490 top1= 94.7917
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.0197 top1= 99.3056
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.0382 top1= 97.9167
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.1306 top1= 95.8333

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0882 top1= 66.3962

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.0342 top1= 99.3056

=== Log global consensus distance @ E24B0 ===
consensus_distance=11.274


[E24B10 |   3872/60000 (  6%) ] Loss: 0.0688 top1= 98.9583
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.0439 top1= 98.6111
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.3453 top1= 95.8333
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.0377 top1= 98.9583
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.0331 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9712 top1= 68.3694

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.0863 top1= 98.2639

=== Log global consensus distance @ E25B0 ===
consensus_distance=11.205


[E25B10 |   3872/60000 (  6%) ] Loss: 0.0611 top1= 98.2639
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.0493 top1= 98.2639
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.0957 top1= 97.2222
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.0242 top1= 99.3056
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.0216 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0443 top1= 65.6751

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.0258 top1= 98.6111

=== Log global consensus distance @ E26B0 ===
consensus_distance=11.243


[E26B10 |   3872/60000 (  6%) ] Loss: 0.0494 top1= 98.2639
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.0215 top1= 99.6528
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.0207 top1= 99.3056
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.0112 top1= 99.6528
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.0217 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0534 top1= 64.5733

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.0302 top1= 98.2639

=== Log global consensus distance @ E27B0 ===
consensus_distance=11.302


[E27B10 |   3872/60000 (  6%) ] Loss: 0.0439 top1= 97.9167
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.0216 top1= 99.3056
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.0235 top1= 99.6528
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.0198 top1= 99.6528
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.0264 top1= 98.9583

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9997 top1= 67.6983

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.0232 top1= 98.6111

=== Log global consensus distance @ E28B0 ===
consensus_distance=11.359


[E28B10 |   3872/60000 (  6%) ] Loss: 0.0483 top1= 97.9167
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.0170 top1= 99.6528
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.2554 top1= 96.8750
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.0157 top1= 99.6528
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.1328 top1= 95.8333

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0401 top1= 65.5749

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.0260 top1= 99.3056

=== Log global consensus distance @ E29B0 ===
consensus_distance=11.381


[E29B10 |   3872/60000 (  6%) ] Loss: 0.0227 top1= 98.9583
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.0199 top1=100.0000
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.4742 top1= 97.5694
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.0079 top1=100.0000
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.1418 top1= 95.4861

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0003 top1= 66.8670

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.0102 top1= 99.6528

=== Log global consensus distance @ E30B0 ===
consensus_distance=11.426


[E30B10 |   3872/60000 (  6%) ] Loss: 0.0189 top1= 99.3056
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.0096 top1=100.0000
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.3570 top1= 97.5694
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.0850 top1= 97.2222
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.0231 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.0626 top1= 67.2276

