Epoch: 0001 train_loss= 0.70108 train_acc= 0.47013 val_loss= 0.69619 val_acc= 0.59016 time= 0.34983
Epoch: 0002 train_loss= 0.69805 train_acc= 0.52338 val_loss= 0.69263 val_acc= 0.59016 time= 0.01562
Epoch: 0003 train_loss= 0.69580 train_acc= 0.52597 val_loss= 0.68988 val_acc= 0.59016 time= 0.01563
Epoch: 0004 train_loss= 0.69422 train_acc= 0.52468 val_loss= 0.68771 val_acc= 0.59016 time= 0.00000
Epoch: 0005 train_loss= 0.69347 train_acc= 0.52468 val_loss= 0.68626 val_acc= 0.59016 time= 0.01563
Epoch: 0006 train_loss= 0.69284 train_acc= 0.52468 val_loss= 0.68545 val_acc= 0.59016 time= 0.01563
Epoch: 0007 train_loss= 0.69276 train_acc= 0.52468 val_loss= 0.68529 val_acc= 0.59016 time= 0.01563
Epoch: 0008 train_loss= 0.69269 train_acc= 0.52468 val_loss= 0.68545 val_acc= 0.59016 time= 0.00000
Epoch: 0009 train_loss= 0.69270 train_acc= 0.52597 val_loss= 0.68576 val_acc= 0.59016 time= 0.01563
Epoch: 0010 train_loss= 0.69255 train_acc= 0.52987 val_loss= 0.68619 val_acc= 0.59016 time= 0.01563
Epoch: 0011 train_loss= 0.69252 train_acc= 0.52727 val_loss= 0.68639 val_acc= 0.59016 time= 0.00000
Epoch: 0012 train_loss= 0.69275 train_acc= 0.53117 val_loss= 0.68609 val_acc= 0.59016 time= 0.01563
Epoch: 0013 train_loss= 0.69230 train_acc= 0.53506 val_loss= 0.68543 val_acc= 0.59016 time= 0.01563
Epoch: 0014 train_loss= 0.69210 train_acc= 0.53636 val_loss= 0.68488 val_acc= 0.59016 time= 0.01563
Epoch: 0015 train_loss= 0.69171 train_acc= 0.53766 val_loss= 0.68418 val_acc= 0.59016 time= 0.00000
Epoch: 0016 train_loss= 0.69196 train_acc= 0.53636 val_loss= 0.68372 val_acc= 0.59016 time= 0.01563
Epoch: 0017 train_loss= 0.69132 train_acc= 0.54026 val_loss= 0.68348 val_acc= 0.59016 time= 0.01563
Epoch: 0018 train_loss= 0.69152 train_acc= 0.54026 val_loss= 0.68349 val_acc= 0.59016 time= 0.01563
Epoch: 0019 train_loss= 0.69121 train_acc= 0.54416 val_loss= 0.68344 val_acc= 0.59016 time= 0.00000
Epoch: 0020 train_loss= 0.69082 train_acc= 0.54026 val_loss= 0.68356 val_acc= 0.59016 time= 0.01563
Epoch: 0021 train_loss= 0.69073 train_acc= 0.54416 val_loss= 0.68290 val_acc= 0.59016 time= 0.01563
Epoch: 0022 train_loss= 0.69054 train_acc= 0.55714 val_loss= 0.68163 val_acc= 0.59016 time= 0.00000
Epoch: 0023 train_loss= 0.69044 train_acc= 0.54675 val_loss= 0.68089 val_acc= 0.59016 time= 0.01563
Epoch: 0024 train_loss= 0.69017 train_acc= 0.54935 val_loss= 0.68043 val_acc= 0.59016 time= 0.01563
Epoch: 0025 train_loss= 0.69032 train_acc= 0.54935 val_loss= 0.68042 val_acc= 0.59016 time= 0.01563
Epoch: 0026 train_loss= 0.68976 train_acc= 0.55065 val_loss= 0.68080 val_acc= 0.59016 time= 0.00000
Epoch: 0027 train_loss= 0.68954 train_acc= 0.55584 val_loss= 0.68150 val_acc= 0.59016 time= 0.01563
Epoch: 0028 train_loss= 0.68970 train_acc= 0.55714 val_loss= 0.68201 val_acc= 0.60656 time= 0.01563
Early stopping...
Optimization Finished!
Test set results: cost= 0.69117 accuracy= 0.55738 time= 0.00000 
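The "Early stopping..." line above indicates training halted before the epoch budget was exhausted. A minimal sketch of one plausible criterion follows — assuming the windowed check used in Kipf's GCN reference implementation (which produces logs in this exact format), where training stops once the newest validation loss exceeds the mean of the preceding `window` validation losses. The function name `should_stop` and the `window` parameter are illustrative, not taken from the log.

```python
def should_stop(val_losses, window=10):
    """Windowed early-stopping check (assumed criterion, not confirmed by the log).

    Returns True once the most recent validation loss is worse than the
    mean of the `window` losses that preceded it. Before `window + 1`
    epochs have been recorded, never stops.
    """
    if len(val_losses) <= window:
        return False
    # The `window` validation losses immediately before the newest one.
    recent = val_losses[-(window + 1):-1]
    return val_losses[-1] > sum(recent) / len(recent)
```

In the log above, val_loss plateaus near 0.68 and then ticks upward around epochs 26-28, which is the kind of pattern such a criterion reacts to; the checkpoint used for the final test-set evaluation would then be the one saved before the stop.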
