Epoch: 0001 train_loss= 0.70074 train_acc= 0.53636 val_loss= 0.69789 val_acc= 0.52459 time= 0.17189
Epoch: 0002 train_loss= 0.69759 train_acc= 0.57576 val_loss= 0.69528 val_acc= 0.63934 time= 0.01563
Epoch: 0003 train_loss= 0.69496 train_acc= 0.63030 val_loss= 0.69343 val_acc= 0.63934 time= 0.01563
Epoch: 0004 train_loss= 0.69285 train_acc= 0.66364 val_loss= 0.69199 val_acc= 0.60656 time= 0.00000
Epoch: 0005 train_loss= 0.69142 train_acc= 0.67879 val_loss= 0.69093 val_acc= 0.62295 time= 0.01563
Epoch: 0006 train_loss= 0.68939 train_acc= 0.69697 val_loss= 0.69025 val_acc= 0.60656 time= 0.00000
Epoch: 0007 train_loss= 0.68865 train_acc= 0.68788 val_loss= 0.68975 val_acc= 0.60656 time= 0.01563
Epoch: 0008 train_loss= 0.68773 train_acc= 0.68788 val_loss= 0.68934 val_acc= 0.63934 time= 0.00000
Epoch: 0009 train_loss= 0.68651 train_acc= 0.67273 val_loss= 0.68904 val_acc= 0.63934 time= 0.01563
Epoch: 0010 train_loss= 0.68527 train_acc= 0.68182 val_loss= 0.68863 val_acc= 0.63934 time= 0.00000
Epoch: 0011 train_loss= 0.68400 train_acc= 0.70303 val_loss= 0.68801 val_acc= 0.63934 time= 0.01563
Epoch: 0012 train_loss= 0.68456 train_acc= 0.66667 val_loss= 0.68728 val_acc= 0.60656 time= 0.01563
Epoch: 0013 train_loss= 0.68373 train_acc= 0.71515 val_loss= 0.68672 val_acc= 0.60656 time= 0.00000
Epoch: 0014 train_loss= 0.68284 train_acc= 0.67879 val_loss= 0.68641 val_acc= 0.63934 time= 0.01563
Epoch: 0015 train_loss= 0.68121 train_acc= 0.70909 val_loss= 0.68625 val_acc= 0.63934 time= 0.00000
Epoch: 0016 train_loss= 0.68038 train_acc= 0.71515 val_loss= 0.68597 val_acc= 0.63934 time= 0.01563
Epoch: 0017 train_loss= 0.68057 train_acc= 0.67273 val_loss= 0.68533 val_acc= 0.63934 time= 0.01563
Epoch: 0018 train_loss= 0.67724 train_acc= 0.71515 val_loss= 0.68455 val_acc= 0.63934 time= 0.00000
Epoch: 0019 train_loss= 0.67459 train_acc= 0.68182 val_loss= 0.68392 val_acc= 0.63934 time= 0.01563
Epoch: 0020 train_loss= 0.67504 train_acc= 0.70606 val_loss= 0.68328 val_acc= 0.62295 time= 0.00000
Epoch: 0021 train_loss= 0.67693 train_acc= 0.66970 val_loss= 0.68230 val_acc= 0.60656 time= 0.01563
Epoch: 0022 train_loss= 0.67246 train_acc= 0.70606 val_loss= 0.68128 val_acc= 0.65574 time= 0.01563
Epoch: 0023 train_loss= 0.67217 train_acc= 0.72121 val_loss= 0.68078 val_acc= 0.67213 time= 0.00000
Epoch: 0024 train_loss= 0.67119 train_acc= 0.70909 val_loss= 0.68027 val_acc= 0.67213 time= 0.01563
Epoch: 0025 train_loss= 0.67478 train_acc= 0.73939 val_loss= 0.68001 val_acc= 0.62295 time= 0.00000
Epoch: 0026 train_loss= 0.66773 train_acc= 0.68788 val_loss= 0.67989 val_acc= 0.60656 time= 0.01563
Epoch: 0027 train_loss= 0.67341 train_acc= 0.67273 val_loss= 0.67961 val_acc= 0.62295 time= 0.00000
Epoch: 0028 train_loss= 0.66716 train_acc= 0.70303 val_loss= 0.67985 val_acc= 0.63934 time= 0.01563
Epoch: 0029 train_loss= 0.66701 train_acc= 0.72121 val_loss= 0.68082 val_acc= 0.63934 time= 0.01562
Epoch: 0030 train_loss= 0.66560 train_acc= 0.71515 val_loss= 0.68194 val_acc= 0.55738 time= 0.00000
Early stopping...
Optimization Finished!
Test set results: cost= 0.66608 accuracy= 0.68033 time= 0.00000 
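The run above stops at epoch 30 once validation loss starts rising. A minimal sketch of a patience-style early-stopping loop that could produce this kind of log is below; the stopping rule (stop after `patience` epochs without validation-loss improvement) and the simulated loss values are assumptions for illustration, not taken from the original run.

```python
# Sketch of an early-stopping training loop (assumed patience rule).
# The val_losses list simulates a real model's per-epoch validation loss:
# it improves for 28 epochs, then worsens, triggering early stopping.
import time

def train(num_epochs=200, patience=3):
    """Stop when val_loss fails to improve for `patience` consecutive epochs."""
    val_losses = [0.698 - 0.0005 * e for e in range(28)] + [0.685, 0.686, 0.687]
    best, bad_epochs = float("inf"), 0
    for epoch in range(min(num_epochs, len(val_losses))):
        t = time.time()
        val_loss = val_losses[epoch]  # a real loop would evaluate the model here
        print(f"Epoch: {epoch + 1:04d} val_loss= {val_loss:.5f} "
              f"time= {time.time() - t:.5f}")
        if val_loss < best:
            best, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
        if bad_epochs >= patience:
            print("Early stopping...")
            break
    print("Optimization Finished!")
    return epoch + 1

last_epoch = train()
```

Note that the `time=` values in the log quantize to multiples of ~0.01563 s, which is consistent with the coarse resolution of `time.time()` on some platforms; `time.perf_counter()` gives finer-grained epoch timings.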
