
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker SGDMWorker(index=9, momentum=0.9)
=> Add worker ByzantineWorker(index=10)
=> Add worker ByzantineWorker(index=11)
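The roster above (10 momentum-SGD workers followed by 2 Byzantine workers) can be sketched as follows. The classes here are hypothetical stand-ins: the repo's real `SGDMWorker`/`ByzantineWorker` take more constructor arguments, but dataclasses reproduce the `SGDMWorker(index=0, momentum=0.9)` reprs seen in the log.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the trainer's worker classes; the real ones
# wrap a model, an optimizer, and a data shard.
@dataclass
class SGDMWorker:
    index: int
    momentum: float = 0.9

@dataclass
class ByzantineWorker:
    index: int

def build_workers(n_honest: int, n_byzantine: int, momentum: float = 0.9):
    """First n_honest indices are SGD-with-momentum workers, the rest Byzantine."""
    workers = [SGDMWorker(index=i, momentum=momentum) for i in range(n_honest)]
    workers += [ByzantineWorker(index=n_honest + j) for j in range(n_byzantine)]
    return workers

for w in build_workers(10, 2):
    print(f"=> Add worker {w}")
```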

=== Start adding graph ===
<codes.graph_utils.RandomSmallWorldGraph object at 0x7fe3ff0d8400>

Train epoch 1
[E 1B0  |    384/60000 (  1%) ] Loss: 2.3054 top1= 10.0000

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([1, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([2, 3, 3, 3, 3], device='cuda:0')
Worker 4 has targets: tensor([3, 4, 4, 4, 4], device='cuda:0')
Worker 5 has targets: tensor([4, 5, 5, 5, 5], device='cuda:0')
Worker 6 has targets: tensor([6, 6, 6, 6, 6], device='cuda:0')
Worker 7 has targets: tensor([7, 7, 7, 7, 7], device='cuda:0')
Worker 8 has targets: tensor([7, 8, 8, 8, 8], device='cuda:0')
Worker 9 has targets: tensor([8, 9, 9, 9, 9], device='cuda:0')
Worker 10 has targets: tensor([4, 8, 8, 6, 9], device='cuda:0')
Worker 11 has targets: tensor([5, 3, 6, 0, 9], device='cuda:0')
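The peeked batches above show a strongly label-skewed (non-IID) split: honest worker i sees almost exclusively digit i, while the Byzantine workers draw mixed labels. A minimal sketch of how such a split is typically produced, assuming a sort-by-label contiguous sharding (the repo's actual sampler may differ at shard boundaries, which would explain the occasional spill-over label):

```python
def shard_by_label(labels, n_workers):
    """Sort example indices by label, then hand each worker one contiguous shard,
    so with 10 workers over 10 balanced classes worker i gets only class i."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    shard = len(labels) // n_workers
    return [order[w * shard:(w + 1) * shard] for w in range(n_workers)]

labels = [i % 10 for i in range(60000)]  # stand-in for balanced MNIST targets
shards = shard_by_label(labels, 10)
print({w: sorted({labels[i] for i in shards[w]}) for w in range(10)})
```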

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.004
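One common definition of the consensus distance logged here is the root-mean-square Euclidean distance of each worker's flattened parameter vector from the network-wide mean; the repo may use a squared or per-parameter variant. A pure-Python sketch under that assumption:

```python
import math

def consensus_distance(param_vectors):
    """RMS distance of each worker's parameter vector from the mean vector."""
    n, d = len(param_vectors), len(param_vectors[0])
    mean = [sum(v[j] for v in param_vectors) / n for j in range(d)]
    sq = sum(sum((v[j] - mean[j]) ** 2 for j in range(d)) for v in param_vectors)
    return math.sqrt(sq / n)

# Two workers placed symmetrically around the origin are each at distance 1
# from the mean, so the consensus distance is 1.0.
print(consensus_distance([[1.0, 0.0], [-1.0, 0.0]]))  # → 1.0
```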


=== Log average shortest path distance for small world @ E1B0 ===
2.7777777777777777
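The figure above is the mean shortest-path length of the communication graph, the standard "small-world" statistic. A self-contained BFS sketch on a toy 12-node ring with a few shortcut edges (the shortcuts here are illustrative, not the edges `RandomSmallWorldGraph` actually drew):

```python
from collections import deque

def average_shortest_path_length(adj):
    """Mean shortest-path length over all ordered pairs of distinct nodes,
    via BFS from every node of an unweighted, connected graph."""
    n = len(adj)
    total = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

# 12-node ring plus three shortcut edges: a toy small-world topology.
n = 12
adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
for a, b in [(0, 6), (2, 9), (4, 11)]:
    adj[a].add(b)
    adj[b].add(a)
print(average_shortest_path_length(adj))
```

The shortcuts pull the average well below the plain ring's 36/11 ≈ 3.27, which is the small-world effect the log is tracking.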


[E 1B10 |   4224/60000 (  7%) ] Loss: 0.1580 top1= 94.6875
[E 1B20 |   8064/60000 ( 13%) ] Loss: 0.0995 top1= 98.1250
[E 1B30 |  11904/60000 ( 20%) ] Loss: 0.0733 top1= 97.5000
[E 1B40 |  15744/60000 ( 26%) ] Loss: 0.0743 top1= 98.4375
[E 1B50 |  19584/60000 ( 33%) ] Loss: 0.0782 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8892 top1= 71.6847
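The "Averaged model" line evaluates a single model built by averaging the workers' parameters coordinate-wise, then scoring it on the validation set. A minimal sketch of the averaging step over flattened parameter vectors (the repo presumably averages PyTorch state dicts tensor-by-tensor, which is equivalent):

```python
def average_params(worker_params):
    """Coordinate-wise mean of the workers' flattened parameter vectors."""
    n = len(worker_params)
    d = len(worker_params[0])
    return [sum(p[j] for p in worker_params) / n for j in range(d)]

print(average_params([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))  # → [3.0, 4.0]
```

Note the gap between the ~98% per-worker training accuracy and the ~72% averaged-model validation accuracy: under the non-IID split shown earlier, each worker overfits its own digit, so the average is far from any single worker's optimum.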

Train epoch 2
[E 2B0  |    384/60000 (  1%) ] Loss: 0.1089 top1= 97.1875

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.126


[E 2B10 |   4224/60000 (  7%) ] Loss: 0.0433 top1= 99.6875
[E 2B20 |   8064/60000 ( 13%) ] Loss: 0.0443 top1= 98.7500
[E 2B30 |  11904/60000 ( 20%) ] Loss: 0.0508 top1= 98.4375
[E 2B40 |  15744/60000 ( 26%) ] Loss: 0.0734 top1= 97.8125
[E 2B50 |  19584/60000 ( 33%) ] Loss: 0.0511 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7736 top1= 77.1334

Train epoch 3
[E 3B0  |    384/60000 (  1%) ] Loss: 0.0792 top1= 98.7500

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.105


[E 3B10 |   4224/60000 (  7%) ] Loss: 0.0407 top1= 99.6875
[E 3B20 |   8064/60000 ( 13%) ] Loss: 0.0354 top1= 99.6875
[E 3B30 |  11904/60000 ( 20%) ] Loss: 0.0455 top1= 98.4375
[E 3B40 |  15744/60000 ( 26%) ] Loss: 0.0637 top1= 98.1250
[E 3B50 |  19584/60000 ( 33%) ] Loss: 0.0459 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7855 top1= 78.2051

Train epoch 4
[E 4B0  |    384/60000 (  1%) ] Loss: 0.0741 top1= 98.7500

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.102


[E 4B10 |   4224/60000 (  7%) ] Loss: 0.0380 top1= 99.6875
[E 4B20 |   8064/60000 ( 13%) ] Loss: 0.0304 top1= 99.6875
[E 4B30 |  11904/60000 ( 20%) ] Loss: 0.0412 top1= 98.7500
[E 4B40 |  15744/60000 ( 26%) ] Loss: 0.0638 top1= 98.4375
[E 4B50 |  19584/60000 ( 33%) ] Loss: 0.0435 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.8525 top1= 78.4355

Train epoch 5
[E 5B0  |    384/60000 (  1%) ] Loss: 0.0781 top1= 98.1250

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.102


[E 5B10 |   4224/60000 (  7%) ] Loss: 0.0386 top1= 99.6875
[E 5B20 |   8064/60000 ( 13%) ] Loss: 0.0335 top1= 99.6875
[E 5B30 |  11904/60000 ( 20%) ] Loss: 0.0471 top1= 98.7500
[E 5B40 |  15744/60000 ( 26%) ] Loss: 0.0671 top1= 98.4375
[E 5B50 |  19584/60000 ( 33%) ] Loss: 0.0477 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9654 top1= 76.9131

Train epoch 6
[E 6B0  |    384/60000 (  1%) ] Loss: 0.0856 top1= 98.1250

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.102


[E 6B10 |   4224/60000 (  7%) ] Loss: 0.0449 top1= 99.6875
[E 6B20 |   8064/60000 ( 13%) ] Loss: 0.0400 top1= 99.3750
[E 6B30 |  11904/60000 ( 20%) ] Loss: 0.0556 top1= 98.4375
[E 6B40 |  15744/60000 ( 26%) ] Loss: 0.0719 top1= 98.4375
[E 6B50 |  19584/60000 ( 33%) ] Loss: 0.0557 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.1027 top1= 74.9399

Train epoch 7
[E 7B0  |    384/60000 (  1%) ] Loss: 0.0970 top1= 97.5000

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.100


[E 7B10 |   4224/60000 (  7%) ] Loss: 0.0553 top1= 99.3750
[E 7B20 |   8064/60000 ( 13%) ] Loss: 0.0476 top1= 98.7500
[E 7B30 |  11904/60000 ( 20%) ] Loss: 0.0602 top1= 98.4375
[E 7B40 |  15744/60000 ( 26%) ] Loss: 0.0777 top1= 98.7500
[E 7B50 |  19584/60000 ( 33%) ] Loss: 0.0628 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2234 top1= 72.5861

Train epoch 8
[E 8B0  |    384/60000 (  1%) ] Loss: 0.1748 top1= 96.2500

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.105


[E 8B10 |   4224/60000 (  7%) ] Loss: 0.0582 top1= 99.3750
[E 8B20 |   8064/60000 ( 13%) ] Loss: 0.0523 top1= 99.3750
[E 8B30 |  11904/60000 ( 20%) ] Loss: 0.0658 top1= 98.4375
[E 8B40 |  15744/60000 ( 26%) ] Loss: 0.0798 top1= 98.4375
[E 8B50 |  19584/60000 ( 33%) ] Loss: 0.0663 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.2723 top1= 70.9736

Train epoch 9
[E 9B0  |    384/60000 (  1%) ] Loss: 0.1160 top1= 97.1875

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.096


[E 9B10 |   4224/60000 (  7%) ] Loss: 0.0732 top1= 99.3750
[E 9B20 |   8064/60000 ( 13%) ] Loss: 0.0539 top1= 98.7500
[E 9B30 |  11904/60000 ( 20%) ] Loss: 0.0609 top1= 98.4375
[E 9B40 |  15744/60000 ( 26%) ] Loss: 0.0826 top1= 98.1250
[E 9B50 |  19584/60000 ( 33%) ] Loss: 0.0647 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3117 top1= 69.9519

Train epoch 10
[E10B0  |    384/60000 (  1%) ] Loss: 0.1295 top1= 97.1875

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.100


[E10B10 |   4224/60000 (  7%) ] Loss: 0.0754 top1= 99.0625
[E10B20 |   8064/60000 ( 13%) ] Loss: 0.0579 top1= 98.4375
[E10B30 |  11904/60000 ( 20%) ] Loss: 0.0719 top1= 98.4375
[E10B40 |  15744/60000 ( 26%) ] Loss: 0.0822 top1= 98.1250
[E10B50 |  19584/60000 ( 33%) ] Loss: 0.0651 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3363 top1= 68.1891

Train epoch 11
[E11B0  |    384/60000 (  1%) ] Loss: 0.1764 top1= 96.5625

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.101


[E11B10 |   4224/60000 (  7%) ] Loss: 0.0624 top1= 99.0625
[E11B20 |   8064/60000 ( 13%) ] Loss: 0.0578 top1= 98.4375
[E11B30 |  11904/60000 ( 20%) ] Loss: 0.0703 top1= 98.4375
[E11B40 |  15744/60000 ( 26%) ] Loss: 0.0832 top1= 98.1250
[E11B50 |  19584/60000 ( 33%) ] Loss: 0.0636 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3492 top1= 67.3077

Train epoch 12
[E12B0  |    384/60000 (  1%) ] Loss: 0.1637 top1= 95.9375

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.100


[E12B10 |   4224/60000 (  7%) ] Loss: 0.0644 top1= 98.7500
[E12B20 |   8064/60000 ( 13%) ] Loss: 0.0609 top1= 98.1250
[E12B30 |  11904/60000 ( 20%) ] Loss: 0.0715 top1= 98.4375
[E12B40 |  15744/60000 ( 26%) ] Loss: 0.0831 top1= 98.1250
[E12B50 |  19584/60000 ( 33%) ] Loss: 0.0638 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3567 top1= 67.4980

Train epoch 13
[E13B0  |    384/60000 (  1%) ] Loss: 0.1385 top1= 96.8750

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.097


[E13B10 |   4224/60000 (  7%) ] Loss: 0.0694 top1= 99.0625
[E13B20 |   8064/60000 ( 13%) ] Loss: 0.0604 top1= 98.7500
[E13B30 |  11904/60000 ( 20%) ] Loss: 0.0687 top1= 98.4375
[E13B40 |  15744/60000 ( 26%) ] Loss: 0.0837 top1= 98.1250
[E13B50 |  19584/60000 ( 33%) ] Loss: 0.0650 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3721 top1= 66.2159

Train epoch 14
[E14B0  |    384/60000 (  1%) ] Loss: 0.1609 top1= 95.9375

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.101


[E14B10 |   4224/60000 (  7%) ] Loss: 0.0632 top1= 98.7500
[E14B20 |   8064/60000 ( 13%) ] Loss: 0.0597 top1= 98.4375
[E14B30 |  11904/60000 ( 20%) ] Loss: 0.0740 top1= 98.4375
[E14B40 |  15744/60000 ( 26%) ] Loss: 0.0816 top1= 98.1250
[E14B50 |  19584/60000 ( 33%) ] Loss: 0.0644 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3829 top1= 65.6851

Train epoch 15
[E15B0  |    384/60000 (  1%) ] Loss: 0.1784 top1= 95.6250

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.105


[E15B10 |   4224/60000 (  7%) ] Loss: 0.0592 top1= 99.0625
[E15B20 |   8064/60000 ( 13%) ] Loss: 0.0564 top1= 99.3750
[E15B30 |  11904/60000 ( 20%) ] Loss: 0.0699 top1= 98.4375
[E15B40 |  15744/60000 ( 26%) ] Loss: 0.0821 top1= 98.1250
[E15B50 |  19584/60000 ( 33%) ] Loss: 0.0637 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3787 top1= 66.3161

Train epoch 16
[E16B0  |    384/60000 (  1%) ] Loss: 0.1598 top1= 95.9375

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.098


[E16B10 |   4224/60000 (  7%) ] Loss: 0.0606 top1= 99.0625
[E16B20 |   8064/60000 ( 13%) ] Loss: 0.0601 top1= 98.7500
[E16B30 |  11904/60000 ( 20%) ] Loss: 0.0717 top1= 98.4375
[E16B40 |  15744/60000 ( 26%) ] Loss: 0.0819 top1= 98.1250
[E16B50 |  19584/60000 ( 33%) ] Loss: 0.0653 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3865 top1= 65.4247

Train epoch 17
[E17B0  |    384/60000 (  1%) ] Loss: 0.1639 top1= 95.9375

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.101


[E17B10 |   4224/60000 (  7%) ] Loss: 0.0614 top1= 99.0625
[E17B20 |   8064/60000 ( 13%) ] Loss: 0.0582 top1= 98.1250
[E17B30 |  11904/60000 ( 20%) ] Loss: 0.0722 top1= 98.1250
[E17B40 |  15744/60000 ( 26%) ] Loss: 0.0805 top1= 98.1250
[E17B50 |  19584/60000 ( 33%) ] Loss: 0.0673 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3927 top1= 64.8738

Train epoch 18
[E18B0  |    384/60000 (  1%) ] Loss: 0.1634 top1= 95.9375

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.103


[E18B10 |   4224/60000 (  7%) ] Loss: 0.0608 top1= 99.3750
[E18B20 |   8064/60000 ( 13%) ] Loss: 0.0565 top1= 99.0625
[E18B30 |  11904/60000 ( 20%) ] Loss: 0.0708 top1= 98.4375
[E18B40 |  15744/60000 ( 26%) ] Loss: 0.0800 top1= 98.1250
[E18B50 |  19584/60000 ( 33%) ] Loss: 0.0669 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3949 top1= 64.9840

Train epoch 19
[E19B0  |    384/60000 (  1%) ] Loss: 0.1649 top1= 95.9375

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.103


[E19B10 |   4224/60000 (  7%) ] Loss: 0.0608 top1= 99.3750
[E19B20 |   8064/60000 ( 13%) ] Loss: 0.0568 top1= 99.0625
[E19B30 |  11904/60000 ( 20%) ] Loss: 0.0709 top1= 98.4375
[E19B40 |  15744/60000 ( 26%) ] Loss: 0.0800 top1= 98.1250
[E19B50 |  19584/60000 ( 33%) ] Loss: 0.0658 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3969 top1= 64.9539

Train epoch 20
[E20B0  |    384/60000 (  1%) ] Loss: 0.1623 top1= 95.9375

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.103


[E20B10 |   4224/60000 (  7%) ] Loss: 0.0615 top1= 99.0625
[E20B20 |   8064/60000 ( 13%) ] Loss: 0.0563 top1= 99.3750
[E20B30 |  11904/60000 ( 20%) ] Loss: 0.0708 top1= 98.4375
[E20B40 |  15744/60000 ( 26%) ] Loss: 0.0797 top1= 98.1250
[E20B50 |  19584/60000 ( 33%) ] Loss: 0.0649 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3978 top1= 65.2444

Train epoch 21
[E21B0  |    384/60000 (  1%) ] Loss: 0.1460 top1= 96.8750

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.103


[E21B10 |   4224/60000 (  7%) ] Loss: 0.0654 top1= 99.0625
[E21B20 |   8064/60000 ( 13%) ] Loss: 0.0556 top1= 99.0625
[E21B30 |  11904/60000 ( 20%) ] Loss: 0.0712 top1= 98.4375
[E21B40 |  15744/60000 ( 26%) ] Loss: 0.0800 top1= 98.1250
[E21B50 |  19584/60000 ( 33%) ] Loss: 0.0639 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.3987 top1= 65.1843

Train epoch 22
[E22B0  |    384/60000 (  1%) ] Loss: 0.1438 top1= 96.8750

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.102


[E22B10 |   4224/60000 (  7%) ] Loss: 0.0664 top1= 99.0625
[E22B20 |   8064/60000 ( 13%) ] Loss: 0.0557 top1= 99.3750
[E22B30 |  11904/60000 ( 20%) ] Loss: 0.0713 top1= 98.4375
[E22B40 |  15744/60000 ( 26%) ] Loss: 0.0796 top1= 98.1250
[E22B50 |  19584/60000 ( 33%) ] Loss: 0.0651 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4006 top1= 65.2644

Train epoch 23
[E23B0  |    384/60000 (  1%) ] Loss: 0.1373 top1= 96.8750

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.103


[E23B10 |   4224/60000 (  7%) ] Loss: 0.0680 top1= 99.0625
[E23B20 |   8064/60000 ( 13%) ] Loss: 0.0550 top1= 99.0625
[E23B30 |  11904/60000 ( 20%) ] Loss: 0.0716 top1= 98.4375
[E23B40 |  15744/60000 ( 26%) ] Loss: 0.0799 top1= 98.1250
[E23B50 |  19584/60000 ( 33%) ] Loss: 0.0638 top1= 98.7500

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4000 top1= 65.4447

Train epoch 24
[E24B0  |    384/60000 (  1%) ] Loss: 0.1384 top1= 96.8750

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.102


[E24B10 |   4224/60000 (  7%) ] Loss: 0.0673 top1= 99.0625
[E24B20 |   8064/60000 ( 13%) ] Loss: 0.0550 top1= 99.0625
[E24B30 |  11904/60000 ( 20%) ] Loss: 0.0706 top1= 98.4375
[E24B40 |  15744/60000 ( 26%) ] Loss: 0.0797 top1= 98.1250
[E24B50 |  19584/60000 ( 33%) ] Loss: 0.0665 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4022 top1= 65.1943

Train epoch 25
[E25B0  |    384/60000 (  1%) ] Loss: 0.1368 top1= 96.8750

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.103


[E25B10 |   4224/60000 (  7%) ] Loss: 0.0679 top1= 99.0625
[E25B20 |   8064/60000 ( 13%) ] Loss: 0.0557 top1= 98.7500
[E25B30 |  11904/60000 ( 20%) ] Loss: 0.0715 top1= 98.4375
[E25B40 |  15744/60000 ( 26%) ] Loss: 0.0797 top1= 98.1250
[E25B50 |  19584/60000 ( 33%) ] Loss: 0.0654 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4017 top1= 65.0741

Train epoch 26
[E26B0  |    384/60000 (  1%) ] Loss: 0.1402 top1= 96.8750

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.102


[E26B10 |   4224/60000 (  7%) ] Loss: 0.0661 top1= 99.0625
[E26B20 |   8064/60000 ( 13%) ] Loss: 0.0557 top1= 98.4375
[E26B30 |  11904/60000 ( 20%) ] Loss: 0.0705 top1= 98.4375
[E26B40 |  15744/60000 ( 26%) ] Loss: 0.0791 top1= 98.1250
[E26B50 |  19584/60000 ( 33%) ] Loss: 0.0682 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4021 top1= 64.7536

Train epoch 27
[E27B0  |    384/60000 (  1%) ] Loss: 0.1469 top1= 96.8750

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.102


[E27B10 |   4224/60000 (  7%) ] Loss: 0.0643 top1= 99.0625
[E27B20 |   8064/60000 ( 13%) ] Loss: 0.0554 top1= 99.0625
[E27B30 |  11904/60000 ( 20%) ] Loss: 0.0708 top1= 98.4375
[E27B40 |  15744/60000 ( 26%) ] Loss: 0.0786 top1= 98.1250
[E27B50 |  19584/60000 ( 33%) ] Loss: 0.0710 top1= 98.4375

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4021 top1= 64.7937

Train epoch 28
[E28B0  |    384/60000 (  1%) ] Loss: 0.1495 top1= 96.5625

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.100


[E28B10 |   4224/60000 (  7%) ] Loss: 0.0620 top1= 99.0625
[E28B20 |   8064/60000 ( 13%) ] Loss: 0.0568 top1= 99.0625
[E28B30 |  11904/60000 ( 20%) ] Loss: 0.0698 top1= 98.1250
[E28B40 |  15744/60000 ( 26%) ] Loss: 0.0785 top1= 98.1250
[E28B50 |  19584/60000 ( 33%) ] Loss: 0.0813 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4048 top1= 65.4147

Train epoch 29
[E29B0  |    384/60000 (  1%) ] Loss: 0.1292 top1= 96.8750

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.097


[E29B10 |   4224/60000 (  7%) ] Loss: 0.0599 top1= 99.0625
[E29B20 |   8064/60000 ( 13%) ] Loss: 0.0579 top1= 98.7500
[E29B30 |  11904/60000 ( 20%) ] Loss: 0.0641 top1= 98.4375
[E29B40 |  15744/60000 ( 26%) ] Loss: 0.0805 top1= 98.1250
[E29B50 |  19584/60000 ( 33%) ] Loss: 0.0923 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4126 top1= 65.1242

Train epoch 30
[E30B0  |    384/60000 (  1%) ] Loss: 0.1228 top1= 96.8750

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.097


[E30B10 |   4224/60000 (  7%) ] Loss: 0.0591 top1= 99.3750
[E30B20 |   8064/60000 ( 13%) ] Loss: 0.0561 top1= 99.0625
[E30B30 |  11904/60000 ( 20%) ] Loss: 0.0649 top1= 98.4375
[E30B40 |  15744/60000 ( 26%) ] Loss: 0.0813 top1= 98.1250
[E30B50 |  19584/60000 ( 33%) ] Loss: 0.0947 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.4126 top1= 65.4748

