
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker ByzantineWorker(index=9)
=> Add worker ByzantineWorker(index=10)
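
The roster above pairs nine momentum-SGD workers with two Byzantine workers. A minimal, framework-free sketch of what such worker classes might look like (class names and `repr` format follow the log; the update rule and the decision to leave the attack unspecified are assumptions, since the log does not show either):

```python
class SGDMWorker:
    """Honest worker: SGD with heavy-ball momentum (illustrative sketch)."""

    def __init__(self, index, momentum=0.9):
        self.index = index
        self.momentum = momentum
        self.velocity = 0.0  # momentum buffer

    def step(self, param, grad, lr=0.1):
        # v <- momentum * v + grad;  param <- param - lr * v
        self.velocity = self.momentum * self.velocity + grad
        return param - lr * self.velocity

    def __repr__(self):
        return f"SGDMWorker(index={self.index}, momentum={self.momentum})"


class ByzantineWorker:
    """Adversarial worker; the concrete attack is not shown in the log."""

    def __init__(self, index):
        self.index = index

    def __repr__(self):
        return f"ByzantineWorker(index={self.index})"


# Reproduce the roster: workers 0-8 honest, 9-10 Byzantine.
workers = [SGDMWorker(i, momentum=0.9) for i in range(9)]
workers += [ByzantineWorker(i) for i in (9, 10)]
for w in workers:
    print(f"=> Add worker {w!r}")
```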

=== Start adding graph ===
<codes.graph_utils.TorusByzantineGraph object at 0x7f7318fd2400>
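
The topology object above is a `TorusByzantineGraph`. Its construction is not shown in the log; a hedged sketch of one plausible layout is a 3x3 torus over the nine honest workers (each node wired to its left/right/up/down neighbor with wrap-around), with the two Byzantine nodes attached on top of that:

```python
def torus_neighbors(rows, cols):
    """4-regular torus: node (r, c) links to its left/right/up/down
    neighbors with wrap-around. Returns {node_index: sorted neighbor list}."""
    nbrs = {}
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            nbrs[i] = sorted({
                r * cols + (c - 1) % cols,      # left
                r * cols + (c + 1) % cols,      # right
                ((r - 1) % rows) * cols + c,    # up
                ((r + 1) % rows) * cols + c,    # down
            })
    return nbrs


# 3x3 torus over the nine honest workers (indices 0..8); how Byzantine
# workers 9 and 10 are attached is an open detail of the actual graph.
honest_topology = torus_neighbors(3, 3)
```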

Train epoch 1
[E 1B0  |    352/60000 (  1%) ] Loss: 2.3038 top1= 11.1111

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([2, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([4, 3, 3, 4, 3], device='cuda:0')
Worker 4 has targets: tensor([5, 4, 4, 5, 4], device='cuda:0')
Worker 5 has targets: tensor([6, 6, 6, 6, 5], device='cuda:0')
Worker 6 has targets: tensor([7, 7, 7, 7, 6], device='cuda:0')
Worker 7 has targets: tensor([8, 8, 8, 8, 7], device='cuda:0')
Worker 8 has targets: tensor([9, 9, 9, 9, 8], device='cuda:0')
Worker 9 has targets: tensor([1, 1, 2, 1, 5], device='cuda:0')
Worker 10 has targets: tensor([8, 9, 1, 0, 3], device='cuda:0')
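
The peeked targets show a strongly non-IID split: honest worker i holds mostly class i, with boundary spill-over (worker 3 mixes 3s and 4s, worker 8 mixes 8s and 9s), while the Byzantine workers' targets look shuffled. One partition scheme consistent with this pattern, sketched here as an assumption, is sorting sample indices by label and slicing into contiguous equal shards:

```python
def partition_by_sorted_labels(labels, n_workers):
    """Non-IID split: sort sample indices by label, then slice into
    contiguous, equally sized shards (one per worker). With 10 classes
    and 9 workers, shard i is dominated by class i, with some spill-over
    at class boundaries."""
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    shard = len(labels) // n_workers
    return [order[w * shard:(w + 1) * shard] for w in range(n_workers)]


# Toy example: 90 samples, labels 0..9 repeated, split across 9 workers.
labels = [i % 10 for i in range(90)]
shards = partition_by_sorted_labels(labels, 9)
```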

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.000
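
Consensus distance is logged at the start of every epoch. The log does not define it; a common choice, assumed here, is the mean Euclidean distance of each worker's parameter vector from the average parameter vector. Under that definition it is 0.000 at E1B0 because all workers start from the same initialization, and the spike to 3.016 at E20B0 would indicate the workers' models temporarily drifting apart:

```python
import math

def consensus_distance(models):
    """Mean Euclidean distance of each worker's flattened parameter
    vector from the coordinate-wise mean (one common definition; the
    logged metric is assumed to be of this form)."""
    n, d = len(models), len(models[0])
    mean = [sum(m[j] for m in models) / n for j in range(d)]
    dists = [math.sqrt(sum((m[j] - mean[j]) ** 2 for j in range(d)))
             for m in models]
    return sum(dists) / n
```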


[E 1B10 |   3872/60000 (  6%) ] Loss: 0.3258 top1= 83.3333
[E 1B20 |   7392/60000 ( 12%) ] Loss: 0.1545 top1= 95.1389
[E 1B30 |  10912/60000 ( 18%) ] Loss: 0.1043 top1= 96.5278
[E 1B40 |  14432/60000 ( 24%) ] Loss: 0.0711 top1= 98.9583
[E 1B50 |  17952/60000 ( 30%) ] Loss: 0.0873 top1= 97.2222

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.9383 top1= 71.6947
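
Each epoch ends by evaluating an averaged model ("Global Average Validation Accuracy"). A minimal sketch of the two pieces that line implies, under the assumption that the averaged model is a plain coordinate-wise mean of the worker models and top-1 is the usual argmax accuracy (the actual evaluation loop is omitted):

```python
def average_models(models):
    """Coordinate-wise mean of worker parameter vectors; the averaged
    model is what the validation line would evaluate."""
    n = len(models)
    return [sum(m[j] for m in models) / n for j in range(len(models[0]))]


def top1_accuracy(predictions, targets):
    """Percentage of samples whose argmax prediction matches the target."""
    correct = sum(
        1 for p, t in zip(predictions, targets)
        if max(range(len(p)), key=p.__getitem__) == t
    )
    return 100.0 * correct / len(targets)
```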

Train epoch 2
[E 2B0  |    352/60000 (  1%) ] Loss: 0.0497 top1= 98.6111

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.105


[E 2B10 |   3872/60000 (  6%) ] Loss: 0.0773 top1= 98.2639
[E 2B20 |   7392/60000 ( 12%) ] Loss: 0.0859 top1= 97.5694
[E 2B30 |  10912/60000 ( 18%) ] Loss: 0.0714 top1= 98.2639
[E 2B40 |  14432/60000 ( 24%) ] Loss: 0.0423 top1= 99.3056
[E 2B50 |  17952/60000 ( 30%) ] Loss: 0.0550 top1= 98.2639

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.6463 top1= 77.6042

Train epoch 3
[E 3B0  |    352/60000 (  1%) ] Loss: 0.0384 top1= 99.3056

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.081


[E 3B10 |   3872/60000 (  6%) ] Loss: 0.0649 top1= 98.6111
[E 3B20 |   7392/60000 ( 12%) ] Loss: 0.0527 top1= 98.2639
[E 3B30 |  10912/60000 ( 18%) ] Loss: 0.0482 top1= 98.2639
[E 3B40 |  14432/60000 ( 24%) ] Loss: 0.0289 top1= 98.9583
[E 3B50 |  17952/60000 ( 30%) ] Loss: 0.0256 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.5234 top1= 80.9696

Train epoch 4
[E 4B0  |    352/60000 (  1%) ] Loss: 0.0434 top1= 99.3056

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.056


[E 4B10 |   3872/60000 (  6%) ] Loss: 0.0566 top1= 98.2639
[E 4B20 |   7392/60000 ( 12%) ] Loss: 0.0361 top1= 99.6528
[E 4B30 |  10912/60000 ( 18%) ] Loss: 0.0282 top1= 98.9583
[E 4B40 |  14432/60000 ( 24%) ] Loss: 0.0184 top1=100.0000
[E 4B50 |  17952/60000 ( 30%) ] Loss: 0.0217 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4397 top1= 84.5954

Train epoch 5
[E 5B0  |    352/60000 (  1%) ] Loss: 0.0117 top1=100.0000

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.028


[E 5B10 |   3872/60000 (  6%) ] Loss: 0.0351 top1= 99.6528
[E 5B20 |   7392/60000 ( 12%) ] Loss: 0.0222 top1= 99.6528
[E 5B30 |  10912/60000 ( 18%) ] Loss: 0.0174 top1= 99.3056
[E 5B40 |  14432/60000 ( 24%) ] Loss: 0.0216 top1= 99.6528
[E 5B50 |  17952/60000 ( 30%) ] Loss: 0.0133 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3964 top1= 86.4083

Train epoch 6
[E 6B0  |    352/60000 (  1%) ] Loss: 0.0166 top1= 99.6528

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.019


[E 6B10 |   3872/60000 (  6%) ] Loss: 0.0299 top1= 98.9583
[E 6B20 |   7392/60000 ( 12%) ] Loss: 0.0229 top1= 99.3056
[E 6B30 |  10912/60000 ( 18%) ] Loss: 0.0239 top1= 99.3056
[E 6B40 |  14432/60000 ( 24%) ] Loss: 0.0112 top1= 99.6528
[E 6B50 |  17952/60000 ( 30%) ] Loss: 0.0128 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.4031 top1= 85.6370

Train epoch 7
[E 7B0  |    352/60000 (  1%) ] Loss: 0.0213 top1= 99.3056

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.015


[E 7B10 |   3872/60000 (  6%) ] Loss: 0.0199 top1= 99.6528
[E 7B20 |   7392/60000 ( 12%) ] Loss: 0.0167 top1=100.0000
[E 7B30 |  10912/60000 ( 18%) ] Loss: 0.0983 top1= 98.9583
[E 7B40 |  14432/60000 ( 24%) ] Loss: 0.0139 top1=100.0000
[E 7B50 |  17952/60000 ( 30%) ] Loss: 0.0299 top1= 99.3056

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3784 top1= 87.4099

Train epoch 8
[E 8B0  |    352/60000 (  1%) ] Loss: 0.0312 top1= 98.9583

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.079


[E 8B10 |   3872/60000 (  6%) ] Loss: 0.0199 top1= 99.3056
[E 8B20 |   7392/60000 ( 12%) ] Loss: 0.0120 top1=100.0000
[E 8B30 |  10912/60000 ( 18%) ] Loss: 0.0274 top1= 99.3056
[E 8B40 |  14432/60000 ( 24%) ] Loss: 0.0420 top1= 99.6528
[E 8B50 |  17952/60000 ( 30%) ] Loss: 0.0104 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3619 top1= 87.8405

Train epoch 9
[E 9B0  |    352/60000 (  1%) ] Loss: 0.0114 top1=100.0000

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.019


[E 9B10 |   3872/60000 (  6%) ] Loss: 0.0171 top1=100.0000
[E 9B20 |   7392/60000 ( 12%) ] Loss: 0.0178 top1=100.0000
[E 9B30 |  10912/60000 ( 18%) ] Loss: 0.0115 top1=100.0000
[E 9B40 |  14432/60000 ( 24%) ] Loss: 0.0095 top1=100.0000
[E 9B50 |  17952/60000 ( 30%) ] Loss: 0.0060 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3338 top1= 89.0825

Train epoch 10
[E10B0  |    352/60000 (  1%) ] Loss: 0.0097 top1=100.0000

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.010


[E10B10 |   3872/60000 (  6%) ] Loss: 0.0138 top1=100.0000
[E10B20 |   7392/60000 ( 12%) ] Loss: 0.0261 top1= 99.6528
[E10B30 |  10912/60000 ( 18%) ] Loss: 0.0199 top1= 99.3056
[E10B40 |  14432/60000 ( 24%) ] Loss: 0.0118 top1= 99.6528
[E10B50 |  17952/60000 ( 30%) ] Loss: 0.0068 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3352 top1= 89.2528

Train epoch 11
[E11B0  |    352/60000 (  1%) ] Loss: 0.0112 top1=100.0000

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.008


[E11B10 |   3872/60000 (  6%) ] Loss: 0.0153 top1=100.0000
[E11B20 |   7392/60000 ( 12%) ] Loss: 0.0100 top1=100.0000
[E11B30 |  10912/60000 ( 18%) ] Loss: 0.0089 top1= 99.6528
[E11B40 |  14432/60000 ( 24%) ] Loss: 0.0099 top1= 99.6528
[E11B50 |  17952/60000 ( 30%) ] Loss: 0.0113 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3702 top1= 87.3397

Train epoch 12
[E12B0  |    352/60000 (  1%) ] Loss: 0.0046 top1=100.0000

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.017


[E12B10 |   3872/60000 (  6%) ] Loss: 0.0144 top1=100.0000
[E12B20 |   7392/60000 ( 12%) ] Loss: 0.0073 top1=100.0000
[E12B30 |  10912/60000 ( 18%) ] Loss: 0.0059 top1=100.0000
[E12B40 |  14432/60000 ( 24%) ] Loss: 0.0068 top1=100.0000
[E12B50 |  17952/60000 ( 30%) ] Loss: 0.0080 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2911 top1= 90.8053

Train epoch 13
[E13B0  |    352/60000 (  1%) ] Loss: 0.0067 top1=100.0000

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.010


[E13B10 |   3872/60000 (  6%) ] Loss: 0.1197 top1= 98.6111
[E13B20 |   7392/60000 ( 12%) ] Loss: 0.0401 top1= 98.9583
[E13B30 |  10912/60000 ( 18%) ] Loss: 0.0785 top1= 97.9167
[E13B40 |  14432/60000 ( 24%) ] Loss: 0.0084 top1=100.0000
[E13B50 |  17952/60000 ( 30%) ] Loss: 0.0086 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3071 top1= 89.9139

Train epoch 14
[E14B0  |    352/60000 (  1%) ] Loss: 0.0085 top1=100.0000

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.042


[E14B10 |   3872/60000 (  6%) ] Loss: 0.0177 top1= 99.6528
[E14B20 |   7392/60000 ( 12%) ] Loss: 0.0093 top1=100.0000
[E14B30 |  10912/60000 ( 18%) ] Loss: 0.0123 top1= 99.6528
[E14B40 |  14432/60000 ( 24%) ] Loss: 0.0170 top1= 99.6528
[E14B50 |  17952/60000 ( 30%) ] Loss: 0.0073 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2792 top1= 91.7368

Train epoch 15
[E15B0  |    352/60000 (  1%) ] Loss: 0.0065 top1=100.0000

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.007


[E15B10 |   3872/60000 (  6%) ] Loss: 0.0097 top1=100.0000
[E15B20 |   7392/60000 ( 12%) ] Loss: 0.0124 top1=100.0000
[E15B30 |  10912/60000 ( 18%) ] Loss: 0.0115 top1= 99.6528
[E15B40 |  14432/60000 ( 24%) ] Loss: 0.0087 top1=100.0000
[E15B50 |  17952/60000 ( 30%) ] Loss: 0.0065 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2648 top1= 91.9571

Train epoch 16
[E16B0  |    352/60000 (  1%) ] Loss: 0.0053 top1=100.0000

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.006


[E16B10 |   3872/60000 (  6%) ] Loss: 0.0072 top1=100.0000
[E16B20 |   7392/60000 ( 12%) ] Loss: 0.0077 top1=100.0000
[E16B30 |  10912/60000 ( 18%) ] Loss: 0.0096 top1=100.0000
[E16B40 |  14432/60000 ( 24%) ] Loss: 0.0063 top1=100.0000
[E16B50 |  17952/60000 ( 30%) ] Loss: 0.0031 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2636 top1= 92.1074

Train epoch 17
[E17B0  |    352/60000 (  1%) ] Loss: 0.0044 top1=100.0000

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.005


[E17B10 |   3872/60000 (  6%) ] Loss: 0.0046 top1=100.0000
[E17B20 |   7392/60000 ( 12%) ] Loss: 0.0057 top1=100.0000
[E17B30 |  10912/60000 ( 18%) ] Loss: 0.0056 top1=100.0000
[E17B40 |  14432/60000 ( 24%) ] Loss: 0.0051 top1=100.0000
[E17B50 |  17952/60000 ( 30%) ] Loss: 0.0049 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2628 top1= 92.0673

Train epoch 18
[E18B0  |    352/60000 (  1%) ] Loss: 0.0052 top1=100.0000

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.006


[E18B10 |   3872/60000 (  6%) ] Loss: 0.0107 top1=100.0000
[E18B20 |   7392/60000 ( 12%) ] Loss: 0.0142 top1= 99.6528
[E18B30 |  10912/60000 ( 18%) ] Loss: 0.0051 top1=100.0000
[E18B40 |  14432/60000 ( 24%) ] Loss: 0.0039 top1=100.0000
[E18B50 |  17952/60000 ( 30%) ] Loss: 0.0061 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3000 top1= 90.7552

Train epoch 19
[E19B0  |    352/60000 (  1%) ] Loss: 0.0060 top1=100.0000

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.008


[E19B10 |   3872/60000 (  6%) ] Loss: 0.0103 top1=100.0000
[E19B20 |   7392/60000 ( 12%) ] Loss: 0.0089 top1= 99.6528
[E19B30 |  10912/60000 ( 18%) ] Loss: 0.0725 top1= 96.8750
[E19B40 |  14432/60000 ( 24%) ] Loss: 0.0198 top1= 98.9583
[E19B50 |  17952/60000 ( 30%) ] Loss: 0.0653 top1= 98.6111

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.7125 top1= 78.0649

Train epoch 20
[E20B0  |    352/60000 (  1%) ] Loss: 0.0174 top1= 98.9583

=== Log global consensus distance @ E20B0 ===
consensus_distance=3.016


[E20B10 |   3872/60000 (  6%) ] Loss: 0.1227 top1= 96.5278
[E20B20 |   7392/60000 ( 12%) ] Loss: 0.0187 top1=100.0000
[E20B30 |  10912/60000 ( 18%) ] Loss: 0.0338 top1= 98.6111
[E20B40 |  14432/60000 ( 24%) ] Loss: 0.0217 top1= 99.3056
[E20B50 |  17952/60000 ( 30%) ] Loss: 0.0173 top1= 99.6528

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.3610 top1= 90.5649

Train epoch 21
[E21B0  |    352/60000 (  1%) ] Loss: 0.0204 top1= 99.6528

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.126


[E21B10 |   3872/60000 (  6%) ] Loss: 0.0435 top1= 99.3056
[E21B20 |   7392/60000 ( 12%) ] Loss: 0.0129 top1=100.0000
[E21B30 |  10912/60000 ( 18%) ] Loss: 0.0136 top1=100.0000
[E21B40 |  14432/60000 ( 24%) ] Loss: 0.0123 top1=100.0000
[E21B50 |  17952/60000 ( 30%) ] Loss: 0.0045 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2796 top1= 91.1659

Train epoch 22
[E22B0  |    352/60000 (  1%) ] Loss: 0.0106 top1=100.0000

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.007


[E22B10 |   3872/60000 (  6%) ] Loss: 0.0191 top1= 99.3056
[E22B20 |   7392/60000 ( 12%) ] Loss: 0.0126 top1=100.0000
[E22B30 |  10912/60000 ( 18%) ] Loss: 0.0079 top1=100.0000
[E22B40 |  14432/60000 ( 24%) ] Loss: 0.0061 top1=100.0000
[E22B50 |  17952/60000 ( 30%) ] Loss: 0.0042 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2723 top1= 91.5164

Train epoch 23
[E23B0  |    352/60000 (  1%) ] Loss: 0.0072 top1=100.0000

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.006


[E23B10 |   3872/60000 (  6%) ] Loss: 0.0107 top1=100.0000
[E23B20 |   7392/60000 ( 12%) ] Loss: 0.0117 top1=100.0000
[E23B30 |  10912/60000 ( 18%) ] Loss: 0.0075 top1=100.0000
[E23B40 |  14432/60000 ( 24%) ] Loss: 0.0084 top1=100.0000
[E23B50 |  17952/60000 ( 30%) ] Loss: 0.0045 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2610 top1= 92.1775

Train epoch 24
[E24B0  |    352/60000 (  1%) ] Loss: 0.0073 top1=100.0000

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.004


[E24B10 |   3872/60000 (  6%) ] Loss: 0.0119 top1=100.0000
[E24B20 |   7392/60000 ( 12%) ] Loss: 0.0079 top1=100.0000
[E24B30 |  10912/60000 ( 18%) ] Loss: 0.0071 top1=100.0000
[E24B40 |  14432/60000 ( 24%) ] Loss: 0.0067 top1=100.0000
[E24B50 |  17952/60000 ( 30%) ] Loss: 0.0043 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2628 top1= 92.3077

Train epoch 25
[E25B0  |    352/60000 (  1%) ] Loss: 0.0067 top1=100.0000

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.003


[E25B10 |   3872/60000 (  6%) ] Loss: 0.0084 top1=100.0000
[E25B20 |   7392/60000 ( 12%) ] Loss: 0.0084 top1=100.0000
[E25B30 |  10912/60000 ( 18%) ] Loss: 0.0073 top1=100.0000
[E25B40 |  14432/60000 ( 24%) ] Loss: 0.0066 top1=100.0000
[E25B50 |  17952/60000 ( 30%) ] Loss: 0.0043 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2547 top1= 92.3377

Train epoch 26
[E26B0  |    352/60000 (  1%) ] Loss: 0.0063 top1=100.0000

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.003


[E26B10 |   3872/60000 (  6%) ] Loss: 0.0075 top1=100.0000
[E26B20 |   7392/60000 ( 12%) ] Loss: 0.0066 top1=100.0000
[E26B30 |  10912/60000 ( 18%) ] Loss: 0.0091 top1=100.0000
[E26B40 |  14432/60000 ( 24%) ] Loss: 0.0066 top1=100.0000
[E26B50 |  17952/60000 ( 30%) ] Loss: 0.0042 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2570 top1= 92.4179

Train epoch 27
[E27B0  |    352/60000 (  1%) ] Loss: 0.0071 top1=100.0000

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.002


[E27B10 |   3872/60000 (  6%) ] Loss: 0.0076 top1=100.0000
[E27B20 |   7392/60000 ( 12%) ] Loss: 0.0073 top1=100.0000
[E27B30 |  10912/60000 ( 18%) ] Loss: 0.0075 top1=100.0000
[E27B40 |  14432/60000 ( 24%) ] Loss: 0.0056 top1=100.0000
[E27B50 |  17952/60000 ( 30%) ] Loss: 0.0035 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2536 top1= 92.4980

Train epoch 28
[E28B0  |    352/60000 (  1%) ] Loss: 0.0051 top1=100.0000

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.002


[E28B10 |   3872/60000 (  6%) ] Loss: 0.0072 top1=100.0000
[E28B20 |   7392/60000 ( 12%) ] Loss: 0.0069 top1=100.0000
[E28B30 |  10912/60000 ( 18%) ] Loss: 0.0065 top1=100.0000
[E28B40 |  14432/60000 ( 24%) ] Loss: 0.0054 top1=100.0000
[E28B50 |  17952/60000 ( 30%) ] Loss: 0.0034 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2532 top1= 92.5381

Train epoch 29
[E29B0  |    352/60000 (  1%) ] Loss: 0.0051 top1=100.0000

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.002


[E29B10 |   3872/60000 (  6%) ] Loss: 0.0073 top1=100.0000
[E29B20 |   7392/60000 ( 12%) ] Loss: 0.0061 top1=100.0000
[E29B30 |  10912/60000 ( 18%) ] Loss: 0.0080 top1=100.0000
[E29B40 |  14432/60000 ( 24%) ] Loss: 0.0054 top1=100.0000
[E29B50 |  17952/60000 ( 30%) ] Loss: 0.0035 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2550 top1= 92.4579

Train epoch 30
[E30B0  |    352/60000 (  1%) ] Loss: 0.0052 top1=100.0000

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.002


[E30B10 |   3872/60000 (  6%) ] Loss: 0.0067 top1=100.0000
[E30B20 |   7392/60000 ( 12%) ] Loss: 0.0067 top1=100.0000
[E30B30 |  10912/60000 ( 18%) ] Loss: 0.0075 top1=100.0000
[E30B40 |  14432/60000 ( 24%) ] Loss: 0.0048 top1=100.0000
[E30B50 |  17952/60000 ( 30%) ] Loss: 0.0033 top1=100.0000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=0.2566 top1= 92.3077

