
=== Start adding workers ===
=> Add worker SGDMWorker(index=0, momentum=0.9)
=> Add worker SGDMWorker(index=1, momentum=0.9)
=> Add worker SGDMWorker(index=2, momentum=0.9)
=> Add worker SGDMWorker(index=3, momentum=0.9)
=> Add worker SGDMWorker(index=4, momentum=0.9)
=> Add worker SGDMWorker(index=5, momentum=0.9)
=> Add worker SGDMWorker(index=6, momentum=0.9)
=> Add worker SGDMWorker(index=7, momentum=0.9)
=> Add worker SGDMWorker(index=8, momentum=0.9)
=> Add worker SGDMWorker(index=9, momentum=0.9)
=> Add worker ByzantineWorker(index=10)
=> Add worker ByzantineWorker(index=11)
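
The roster above (10 honest momentum-SGD workers plus 2 Byzantine workers) can be reproduced with a minimal sketch. The `SGDMWorker`/`ByzantineWorker` classes below are hypothetical stand-ins for this codebase's real worker classes, which presumably wrap optimizers and attack logic not visible in the log:

```python
from dataclasses import dataclass

@dataclass
class SGDMWorker:
    # hypothetical stand-in: the real worker wraps a model + SGD-with-momentum step
    index: int
    momentum: float = 0.9

@dataclass
class ByzantineWorker:
    # hypothetical stand-in: the real worker sends adversarial updates
    index: int

# 10 honest workers followed by 2 Byzantine workers, as in the log above
workers = [SGDMWorker(index=i) for i in range(10)]
workers += [ByzantineWorker(index=i) for i in range(10, 12)]

for w in workers:
    print(f"=> Add worker {w}")
```

The dataclass reprs happen to match the log's `SGDMWorker(index=0, momentum=0.9)` lines, which suggests the real classes are similarly declared.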

=== Start adding graph ===
=> Add graph codes.graph_utils.RandomSmallWorldGraph
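
`RandomSmallWorldGraph` is not shown in the log, but a small-world communication topology over 12 workers is typically built Watts-Strogatz style: a ring lattice whose edges are randomly rewired. A self-contained sketch under that assumption (the `n`, `k`, `p` values are illustrative, not the run's actual settings):

```python
import random

def small_world_graph(n=12, k=4, p=0.1, seed=0):
    """Watts-Strogatz-style small-world topology as a symmetric adjacency map.

    A sketch of what codes.graph_utils.RandomSmallWorldGraph plausibly builds.
    """
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    # start from a ring lattice: each node linked to its k nearest neighbours
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    # rewire each lattice edge with probability p to a random endpoint
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

graph = small_world_graph()
```

The rewired shortcuts are what keep the average shortest path low (logged as ~2.78 below) while most links stay local.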

Train epoch 1
[E 1B0  |    384/60000 (  1%) ] Loss: 2.3054 top1= 10.0000

=== Peeking data label distribution E1B0 ===
Worker 0 has targets: tensor([0, 0, 0, 0, 0], device='cuda:0')
Worker 1 has targets: tensor([1, 1, 1, 1, 1], device='cuda:0')
Worker 2 has targets: tensor([1, 2, 2, 2, 2], device='cuda:0')
Worker 3 has targets: tensor([2, 3, 3, 3, 3], device='cuda:0')
Worker 4 has targets: tensor([3, 4, 4, 4, 4], device='cuda:0')
Worker 5 has targets: tensor([4, 5, 5, 5, 5], device='cuda:0')
Worker 6 has targets: tensor([6, 6, 6, 6, 6], device='cuda:0')
Worker 7 has targets: tensor([7, 7, 7, 7, 7], device='cuda:0')
Worker 8 has targets: tensor([7, 8, 8, 8, 8], device='cuda:0')
Worker 9 has targets: tensor([8, 9, 9, 9, 9], device='cuda:0')
Worker 10 has targets: tensor([4, 8, 8, 6, 9], device='cuda:0')
Worker 11 has targets: tensor([5, 3, 6, 0, 9], device='cuda:0')
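
The peeked batches show a strongly non-IID split: honest worker `i` sees (almost only) digit `i`, while the Byzantine workers draw mixed labels. The simplest scheme producing this pattern is sort-by-label sharding; the sketch below is an assumption about the sampler, not the repo's exact code:

```python
def shard_by_label(labels, n_honest=10):
    """Sort sample indices by label, then hand each honest worker one
    contiguous shard -- so worker i ends up holding (mostly) class i."""
    order = sorted(range(len(labels)), key=lambda idx: labels[idx])
    shard = len(order) // n_honest
    return [order[w * shard:(w + 1) * shard] for w in range(n_honest)]

labels = [i % 10 for i in range(60000)]  # stand-in for the MNIST targets
shards = shard_by_label(labels)
```

With real MNIST the class counts are unequal, so shard boundaries straddle classes slightly; that would explain the stray labels in workers 2-5, 8, and 9 above.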

=== Log global consensus distance @ E1B0 ===
consensus_distance=0.951
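
The log does not define `consensus_distance`; a common choice is the average L2 distance of each worker's flattened parameter vector from the network mean. A minimal sketch under that assumption (the repo's version may square the distances or exclude Byzantine workers):

```python
import math

def consensus_distance(worker_params):
    """Mean L2 distance of each worker's flattened parameters from the
    network-wide average parameter vector. One common definition; the
    codebase's exact formula is not shown in the log."""
    n, d = len(worker_params), len(worker_params[0])
    mean = [sum(p[j] for p in worker_params) / n for j in range(d)]
    dists = [math.sqrt(sum((p[j] - mean[j]) ** 2 for j in range(d)))
             for p in worker_params]
    return sum(dists) / n

# toy check: identical workers are in perfect consensus
print(consensus_distance([[1.0, 2.0], [1.0, 2.0]]))  # → 0.0
```

Under this reading, the drop from 0.951 at E1B0 to ~0.23 by E4B0 shows the gossip averaging pulling the honest models together, after which the Byzantine workers keep it from shrinking further.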

=== Log average shortest path distance for small world @ E1B0 ===
2.7777777777777777
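
The 2.777... figure is presumably the mean BFS hop distance over ordered node pairs of the 12-node topology. That quantity can be computed with plain breadth-first search; the sketch below verifies itself on a 6-node ring rather than the run's (unknown) graph:

```python
from collections import deque

def avg_shortest_path(adj):
    """Mean BFS hop distance over all ordered pairs of distinct nodes.
    `adj` maps each node to the set of its neighbours."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                       # standard BFS from source s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

# 6-node ring, degree 2: distances from each node are 1,1,2,2,3 → mean 1.8
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(avg_shortest_path(ring))  # → 1.8
```

A bare 12-node ring would average 3.27 hops; the logged 2.78 reflects the small-world shortcuts.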


[E 1B10 |   4224/60000 (  7%) ] Loss: 0.8313 top1= 75.3125
[E 1B20 |   8064/60000 ( 13%) ] Loss: 0.4374 top1= 86.2500
[E 1B30 |  11904/60000 ( 20%) ] Loss: 0.2247 top1= 97.1875
[E 1B40 |  15744/60000 ( 26%) ] Loss: 0.1976 top1= 97.5000
[E 1B50 |  19584/60000 ( 33%) ] Loss: 0.1789 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8468 top1= 54.8177
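
"Averaged model" validation most plausibly means evaluating a uniform parameter average over all workers; note how it scores far below the per-worker training accuracy, since each worker overfits its own label shard and the Byzantine workers drag the mean. A sketch of the averaging step under that assumption:

```python
def average_params(worker_params):
    """Uniform average of the workers' (flattened) parameter vectors --
    a sketch of what the 'Averaged model' evaluation plausibly uses."""
    n = len(worker_params)
    return [sum(vals) / n for vals in zip(*worker_params)]

print(average_params([[1.0, 3.0], [3.0, 5.0]]))  # → [2.0, 4.0]
```

In a real PyTorch run this would average `state_dict` tensors key by key before calling the evaluation loop.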

Train epoch 2
[E 2B0  |    384/60000 (  1%) ] Loss: 0.2662 top1= 95.3125

=== Log global consensus distance @ E2B0 ===
consensus_distance=0.547


[E 2B10 |   4224/60000 (  7%) ] Loss: 0.1678 top1= 98.7500
[E 2B20 |   8064/60000 ( 13%) ] Loss: 0.1683 top1= 98.4375
[E 2B30 |  11904/60000 ( 20%) ] Loss: 0.1514 top1= 97.5000
[E 2B40 |  15744/60000 ( 26%) ] Loss: 0.1775 top1= 98.1250
[E 2B50 |  19584/60000 ( 33%) ] Loss: 0.1565 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8447 top1= 43.8401

Train epoch 3
[E 3B0  |    384/60000 (  1%) ] Loss: 0.2658 top1= 95.0000

=== Log global consensus distance @ E3B0 ===
consensus_distance=0.253


[E 3B10 |   4224/60000 (  7%) ] Loss: 0.1750 top1= 98.1250
[E 3B20 |   8064/60000 ( 13%) ] Loss: 0.1565 top1= 98.1250
[E 3B30 |  11904/60000 ( 20%) ] Loss: 0.1440 top1= 97.8125
[E 3B40 |  15744/60000 ( 26%) ] Loss: 0.1700 top1= 98.1250
[E 3B50 |  19584/60000 ( 33%) ] Loss: 0.1509 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8447 top1= 37.9607

Train epoch 4
[E 4B0  |    384/60000 (  1%) ] Loss: 0.2433 top1= 94.6875

=== Log global consensus distance @ E4B0 ===
consensus_distance=0.236


[E 4B10 |   4224/60000 (  7%) ] Loss: 0.1737 top1= 98.1250
[E 4B20 |   8064/60000 ( 13%) ] Loss: 0.1542 top1= 97.8125
[E 4B30 |  11904/60000 ( 20%) ] Loss: 0.1399 top1= 98.4375
[E 4B40 |  15744/60000 ( 26%) ] Loss: 0.1672 top1= 98.1250
[E 4B50 |  19584/60000 ( 33%) ] Loss: 0.1486 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8483 top1= 39.3229

Train epoch 5
[E 5B0  |    384/60000 (  1%) ] Loss: 0.2370 top1= 95.0000

=== Log global consensus distance @ E5B0 ===
consensus_distance=0.232


[E 5B10 |   4224/60000 (  7%) ] Loss: 0.1708 top1= 98.1250
[E 5B20 |   8064/60000 ( 13%) ] Loss: 0.1536 top1= 97.8125
[E 5B30 |  11904/60000 ( 20%) ] Loss: 0.1373 top1= 98.7500
[E 5B40 |  15744/60000 ( 26%) ] Loss: 0.1653 top1= 98.1250
[E 5B50 |  19584/60000 ( 33%) ] Loss: 0.1477 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8543 top1= 43.1591

Train epoch 6
[E 6B0  |    384/60000 (  1%) ] Loss: 0.2333 top1= 95.6250

=== Log global consensus distance @ E6B0 ===
consensus_distance=0.231


[E 6B10 |   4224/60000 (  7%) ] Loss: 0.1663 top1= 98.4375
[E 6B20 |   8064/60000 ( 13%) ] Loss: 0.1537 top1= 97.8125
[E 6B30 |  11904/60000 ( 20%) ] Loss: 0.1363 top1= 98.7500
[E 6B40 |  15744/60000 ( 26%) ] Loss: 0.1641 top1= 98.1250
[E 6B50 |  19584/60000 ( 33%) ] Loss: 0.1467 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8645 top1= 45.0821

Train epoch 7
[E 7B0  |    384/60000 (  1%) ] Loss: 0.2300 top1= 95.9375

=== Log global consensus distance @ E7B0 ===
consensus_distance=0.230


[E 7B10 |   4224/60000 (  7%) ] Loss: 0.1618 top1= 98.4375
[E 7B20 |   8064/60000 ( 13%) ] Loss: 0.1540 top1= 97.8125
[E 7B30 |  11904/60000 ( 20%) ] Loss: 0.1360 top1= 98.7500
[E 7B40 |  15744/60000 ( 26%) ] Loss: 0.1633 top1= 98.1250
[E 7B50 |  19584/60000 ( 33%) ] Loss: 0.1462 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8756 top1= 44.7917

Train epoch 8
[E 8B0  |    384/60000 (  1%) ] Loss: 0.2291 top1= 95.6250

=== Log global consensus distance @ E8B0 ===
consensus_distance=0.230


[E 8B10 |   4224/60000 (  7%) ] Loss: 0.1606 top1= 98.4375
[E 8B20 |   8064/60000 ( 13%) ] Loss: 0.1531 top1= 97.8125
[E 8B30 |  11904/60000 ( 20%) ] Loss: 0.1357 top1= 98.7500
[E 8B40 |  15744/60000 ( 26%) ] Loss: 0.1629 top1= 98.1250
[E 8B50 |  19584/60000 ( 33%) ] Loss: 0.1461 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8856 top1= 44.1707

Train epoch 9
[E 9B0  |    384/60000 (  1%) ] Loss: 0.2290 top1= 95.6250

=== Log global consensus distance @ E9B0 ===
consensus_distance=0.232


[E 9B10 |   4224/60000 (  7%) ] Loss: 0.1580 top1= 98.4375
[E 9B20 |   8064/60000 ( 13%) ] Loss: 0.1513 top1= 97.8125
[E 9B30 |  11904/60000 ( 20%) ] Loss: 0.1355 top1= 98.7500
[E 9B40 |  15744/60000 ( 26%) ] Loss: 0.1623 top1= 98.1250
[E 9B50 |  19584/60000 ( 33%) ] Loss: 0.1471 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8929 top1= 42.8486

Train epoch 10
[E10B0  |    384/60000 (  1%) ] Loss: 0.2282 top1= 95.9375

=== Log global consensus distance @ E10B0 ===
consensus_distance=0.233


[E10B10 |   4224/60000 (  7%) ] Loss: 0.1617 top1= 98.4375
[E10B20 |   8064/60000 ( 13%) ] Loss: 0.1498 top1= 97.8125
[E10B30 |  11904/60000 ( 20%) ] Loss: 0.1355 top1= 98.7500
[E10B40 |  15744/60000 ( 26%) ] Loss: 0.1623 top1= 98.1250
[E10B50 |  19584/60000 ( 33%) ] Loss: 0.1481 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8968 top1= 41.9671

Train epoch 11
[E11B0  |    384/60000 (  1%) ] Loss: 0.2287 top1= 95.6250

=== Log global consensus distance @ E11B0 ===
consensus_distance=0.234


[E11B10 |   4224/60000 (  7%) ] Loss: 0.1630 top1= 98.4375
[E11B20 |   8064/60000 ( 13%) ] Loss: 0.1480 top1= 97.8125
[E11B30 |  11904/60000 ( 20%) ] Loss: 0.1354 top1= 98.7500
[E11B40 |  15744/60000 ( 26%) ] Loss: 0.1618 top1= 98.4375
[E11B50 |  19584/60000 ( 33%) ] Loss: 0.1487 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8960 top1= 41.4864

Train epoch 12
[E12B0  |    384/60000 (  1%) ] Loss: 0.2311 top1= 95.3125

=== Log global consensus distance @ E12B0 ===
consensus_distance=0.232


[E12B10 |   4224/60000 (  7%) ] Loss: 0.1658 top1= 98.4375
[E12B20 |   8064/60000 ( 13%) ] Loss: 0.1477 top1= 98.1250
[E12B30 |  11904/60000 ( 20%) ] Loss: 0.1347 top1= 98.7500
[E12B40 |  15744/60000 ( 26%) ] Loss: 0.1620 top1= 98.4375
[E12B50 |  19584/60000 ( 33%) ] Loss: 0.1490 top1= 98.1250

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8949 top1= 41.4864

Train epoch 13
[E13B0  |    384/60000 (  1%) ] Loss: 0.2331 top1= 95.0000

=== Log global consensus distance @ E13B0 ===
consensus_distance=0.231


[E13B10 |   4224/60000 (  7%) ] Loss: 0.1684 top1= 98.4375
[E13B20 |   8064/60000 ( 13%) ] Loss: 0.1476 top1= 98.1250
[E13B30 |  11904/60000 ( 20%) ] Loss: 0.1341 top1= 98.7500
[E13B40 |  15744/60000 ( 26%) ] Loss: 0.1620 top1= 98.4375
[E13B50 |  19584/60000 ( 33%) ] Loss: 0.1498 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8944 top1= 41.7568

Train epoch 14
[E14B0  |    384/60000 (  1%) ] Loss: 0.2328 top1= 94.6875

=== Log global consensus distance @ E14B0 ===
consensus_distance=0.230


[E14B10 |   4224/60000 (  7%) ] Loss: 0.1679 top1= 98.4375
[E14B20 |   8064/60000 ( 13%) ] Loss: 0.1482 top1= 98.4375
[E14B30 |  11904/60000 ( 20%) ] Loss: 0.1333 top1= 98.7500
[E14B40 |  15744/60000 ( 26%) ] Loss: 0.1623 top1= 98.4375
[E14B50 |  19584/60000 ( 33%) ] Loss: 0.1506 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8944 top1= 41.7167

Train epoch 15
[E15B0  |    384/60000 (  1%) ] Loss: 0.2318 top1= 94.6875

=== Log global consensus distance @ E15B0 ===
consensus_distance=0.229


[E15B10 |   4224/60000 (  7%) ] Loss: 0.1669 top1= 98.4375
[E15B20 |   8064/60000 ( 13%) ] Loss: 0.1489 top1= 98.7500
[E15B30 |  11904/60000 ( 20%) ] Loss: 0.1329 top1= 98.7500
[E15B40 |  15744/60000 ( 26%) ] Loss: 0.1625 top1= 98.4375
[E15B50 |  19584/60000 ( 33%) ] Loss: 0.1513 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8951 top1= 41.8269

Train epoch 16
[E16B0  |    384/60000 (  1%) ] Loss: 0.2307 top1= 94.6875

=== Log global consensus distance @ E16B0 ===
consensus_distance=0.228


[E16B10 |   4224/60000 (  7%) ] Loss: 0.1666 top1= 98.4375
[E16B20 |   8064/60000 ( 13%) ] Loss: 0.1494 top1= 98.7500
[E16B30 |  11904/60000 ( 20%) ] Loss: 0.1324 top1= 98.7500
[E16B40 |  15744/60000 ( 26%) ] Loss: 0.1625 top1= 98.4375
[E16B50 |  19584/60000 ( 33%) ] Loss: 0.1517 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8963 top1= 41.7768

Train epoch 17
[E17B0  |    384/60000 (  1%) ] Loss: 0.2293 top1= 94.6875

=== Log global consensus distance @ E17B0 ===
consensus_distance=0.227


[E17B10 |   4224/60000 (  7%) ] Loss: 0.1669 top1= 98.4375
[E17B20 |   8064/60000 ( 13%) ] Loss: 0.1496 top1= 98.7500
[E17B30 |  11904/60000 ( 20%) ] Loss: 0.1323 top1= 98.7500
[E17B40 |  15744/60000 ( 26%) ] Loss: 0.1629 top1= 98.4375
[E17B50 |  19584/60000 ( 33%) ] Loss: 0.1530 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8973 top1= 42.0072

Train epoch 18
[E18B0  |    384/60000 (  1%) ] Loss: 0.2274 top1= 95.3125

=== Log global consensus distance @ E18B0 ===
consensus_distance=0.227


[E18B10 |   4224/60000 (  7%) ] Loss: 0.1643 top1= 98.4375
[E18B20 |   8064/60000 ( 13%) ] Loss: 0.1500 top1= 98.7500
[E18B30 |  11904/60000 ( 20%) ] Loss: 0.1321 top1= 99.0625
[E18B40 |  15744/60000 ( 26%) ] Loss: 0.1626 top1= 98.7500
[E18B50 |  19584/60000 ( 33%) ] Loss: 0.1545 top1= 97.8125

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8980 top1= 41.9972

Train epoch 19
[E19B0  |    384/60000 (  1%) ] Loss: 0.2254 top1= 95.3125

=== Log global consensus distance @ E19B0 ===
consensus_distance=0.227


[E19B10 |   4224/60000 (  7%) ] Loss: 0.1628 top1= 98.4375
[E19B20 |   8064/60000 ( 13%) ] Loss: 0.1503 top1= 98.7500
[E19B30 |  11904/60000 ( 20%) ] Loss: 0.1321 top1= 99.0625
[E19B40 |  15744/60000 ( 26%) ] Loss: 0.1627 top1= 98.7500
[E19B50 |  19584/60000 ( 33%) ] Loss: 0.1547 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8993 top1= 42.2075

Train epoch 20
[E20B0  |    384/60000 (  1%) ] Loss: 0.2238 top1= 95.0000

=== Log global consensus distance @ E20B0 ===
consensus_distance=0.226


[E20B10 |   4224/60000 (  7%) ] Loss: 0.1609 top1= 98.4375
[E20B20 |   8064/60000 ( 13%) ] Loss: 0.1508 top1= 98.7500
[E20B30 |  11904/60000 ( 20%) ] Loss: 0.1321 top1= 98.7500
[E20B40 |  15744/60000 ( 26%) ] Loss: 0.1627 top1= 98.4375
[E20B50 |  19584/60000 ( 33%) ] Loss: 0.1553 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8996 top1= 42.3177

Train epoch 21
[E21B0  |    384/60000 (  1%) ] Loss: 0.2226 top1= 95.3125

=== Log global consensus distance @ E21B0 ===
consensus_distance=0.226


[E21B10 |   4224/60000 (  7%) ] Loss: 0.1587 top1= 98.4375
[E21B20 |   8064/60000 ( 13%) ] Loss: 0.1513 top1= 98.7500
[E21B30 |  11904/60000 ( 20%) ] Loss: 0.1323 top1= 98.7500
[E21B40 |  15744/60000 ( 26%) ] Loss: 0.1627 top1= 98.4375
[E21B50 |  19584/60000 ( 33%) ] Loss: 0.1551 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8997 top1= 42.4179

Train epoch 22
[E22B0  |    384/60000 (  1%) ] Loss: 0.2222 top1= 95.3125

=== Log global consensus distance @ E22B0 ===
consensus_distance=0.225


[E22B10 |   4224/60000 (  7%) ] Loss: 0.1584 top1= 98.4375
[E22B20 |   8064/60000 ( 13%) ] Loss: 0.1518 top1= 98.7500
[E22B30 |  11904/60000 ( 20%) ] Loss: 0.1317 top1= 99.0625
[E22B40 |  15744/60000 ( 26%) ] Loss: 0.1630 top1= 98.4375
[E22B50 |  19584/60000 ( 33%) ] Loss: 0.1561 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8993 top1= 42.5781

Train epoch 23
[E23B0  |    384/60000 (  1%) ] Loss: 0.2210 top1= 95.3125

=== Log global consensus distance @ E23B0 ===
consensus_distance=0.226


[E23B10 |   4224/60000 (  7%) ] Loss: 0.1573 top1= 98.4375
[E23B20 |   8064/60000 ( 13%) ] Loss: 0.1511 top1= 98.7500
[E23B30 |  11904/60000 ( 20%) ] Loss: 0.1316 top1= 99.0625
[E23B40 |  15744/60000 ( 26%) ] Loss: 0.1629 top1= 98.4375
[E23B50 |  19584/60000 ( 33%) ] Loss: 0.1570 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8990 top1= 42.6382

Train epoch 24
[E24B0  |    384/60000 (  1%) ] Loss: 0.2204 top1= 95.3125

=== Log global consensus distance @ E24B0 ===
consensus_distance=0.226


[E24B10 |   4224/60000 (  7%) ] Loss: 0.1578 top1= 98.4375
[E24B20 |   8064/60000 ( 13%) ] Loss: 0.1506 top1= 98.7500
[E24B30 |  11904/60000 ( 20%) ] Loss: 0.1314 top1= 99.0625
[E24B40 |  15744/60000 ( 26%) ] Loss: 0.1628 top1= 98.4375
[E24B50 |  19584/60000 ( 33%) ] Loss: 0.1579 top1= 97.1875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8982 top1= 42.6482

Train epoch 25
[E25B0  |    384/60000 (  1%) ] Loss: 0.2199 top1= 95.6250

=== Log global consensus distance @ E25B0 ===
consensus_distance=0.226


[E25B10 |   4224/60000 (  7%) ] Loss: 0.1569 top1= 98.4375
[E25B20 |   8064/60000 ( 13%) ] Loss: 0.1500 top1= 98.7500
[E25B30 |  11904/60000 ( 20%) ] Loss: 0.1313 top1= 99.0625
[E25B40 |  15744/60000 ( 26%) ] Loss: 0.1626 top1= 98.4375
[E25B50 |  19584/60000 ( 33%) ] Loss: 0.1564 top1= 97.1875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8987 top1= 42.6182

Train epoch 26
[E26B0  |    384/60000 (  1%) ] Loss: 0.2192 top1= 95.6250

=== Log global consensus distance @ E26B0 ===
consensus_distance=0.226


[E26B10 |   4224/60000 (  7%) ] Loss: 0.1562 top1= 98.7500
[E26B20 |   8064/60000 ( 13%) ] Loss: 0.1496 top1= 98.7500
[E26B30 |  11904/60000 ( 20%) ] Loss: 0.1311 top1= 99.0625
[E26B40 |  15744/60000 ( 26%) ] Loss: 0.1627 top1= 98.4375
[E26B50 |  19584/60000 ( 33%) ] Loss: 0.1569 top1= 97.1875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8978 top1= 42.8586

Train epoch 27
[E27B0  |    384/60000 (  1%) ] Loss: 0.2187 top1= 95.6250

=== Log global consensus distance @ E27B0 ===
consensus_distance=0.226


[E27B10 |   4224/60000 (  7%) ] Loss: 0.1554 top1= 98.7500
[E27B20 |   8064/60000 ( 13%) ] Loss: 0.1492 top1= 98.7500
[E27B30 |  11904/60000 ( 20%) ] Loss: 0.1308 top1= 99.0625
[E27B40 |  15744/60000 ( 26%) ] Loss: 0.1625 top1= 98.4375
[E27B50 |  19584/60000 ( 33%) ] Loss: 0.1575 top1= 97.1875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8972 top1= 42.8886

Train epoch 28
[E28B0  |    384/60000 (  1%) ] Loss: 0.2187 top1= 95.9375

=== Log global consensus distance @ E28B0 ===
consensus_distance=0.226


[E28B10 |   4224/60000 (  7%) ] Loss: 0.1549 top1= 98.7500
[E28B20 |   8064/60000 ( 13%) ] Loss: 0.1492 top1= 98.7500
[E28B30 |  11904/60000 ( 20%) ] Loss: 0.1307 top1= 99.0625
[E28B40 |  15744/60000 ( 26%) ] Loss: 0.1629 top1= 98.4375
[E28B50 |  19584/60000 ( 33%) ] Loss: 0.1561 top1= 97.5000

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8975 top1= 43.0088

Train epoch 29
[E29B0  |    384/60000 (  1%) ] Loss: 0.2175 top1= 95.6250

=== Log global consensus distance @ E29B0 ===
consensus_distance=0.226


[E29B10 |   4224/60000 (  7%) ] Loss: 0.1544 top1= 98.7500
[E29B20 |   8064/60000 ( 13%) ] Loss: 0.1488 top1= 98.7500
[E29B30 |  11904/60000 ( 20%) ] Loss: 0.1304 top1= 99.0625
[E29B40 |  15744/60000 ( 26%) ] Loss: 0.1625 top1= 98.4375
[E29B50 |  19584/60000 ( 33%) ] Loss: 0.1572 top1= 97.1875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8957 top1= 43.1490

Train epoch 30
[E30B0  |    384/60000 (  1%) ] Loss: 0.2171 top1= 95.6250

=== Log global consensus distance @ E30B0 ===
consensus_distance=0.227


[E30B10 |   4224/60000 (  7%) ] Loss: 0.1544 top1= 98.7500
[E30B20 |   8064/60000 ( 13%) ] Loss: 0.1488 top1= 98.7500
[E30B30 |  11904/60000 ( 20%) ] Loss: 0.1304 top1= 99.0625
[E30B40 |  15744/60000 ( 26%) ] Loss: 0.1625 top1= 98.4375
[E30B50 |  19584/60000 ( 33%) ] Loss: 0.1561 top1= 97.1875

=> Averaged model (Global Average Validation Accuracy) | Eval Loss=1.8954 top1= 43.1991

