Epoch: 0001 train_loss= 1.20320 train_acc= 0.50303 val_loss= 1.00371 val_acc= 0.40984 time= 0.14064
Epoch: 0002 train_loss= 1.36892 train_acc= 0.51515 val_loss= 0.77855 val_acc= 0.39344 time= 0.01563
Epoch: 0003 train_loss= 1.49138 train_acc= 0.46667 val_loss= 0.70106 val_acc= 0.57377 time= 0.00000
Epoch: 0004 train_loss= 1.01020 train_acc= 0.50606 val_loss= 0.76046 val_acc= 0.59016 time= 0.01563
Epoch: 0005 train_loss= 0.87186 train_acc= 0.46970 val_loss= 0.85296 val_acc= 0.62295 time= 0.01563
Epoch: 0006 train_loss= 1.31646 train_acc= 0.48182 val_loss= 0.89678 val_acc= 0.62295 time= 0.00000
Epoch: 0007 train_loss= 1.01192 train_acc= 0.52424 val_loss= 0.91249 val_acc= 0.62295 time= 0.01563
Epoch: 0008 train_loss= 1.25693 train_acc= 0.48182 val_loss= 0.89166 val_acc= 0.62295 time= 0.01563
Epoch: 0009 train_loss= 1.27307 train_acc= 0.50303 val_loss= 0.85355 val_acc= 0.62295 time= 0.00000
Epoch: 0010 train_loss= 1.14294 train_acc= 0.52121 val_loss= 0.81116 val_acc= 0.63934 time= 0.01563
Epoch: 0011 train_loss= 1.02472 train_acc= 0.50606 val_loss= 0.77167 val_acc= 0.63934 time= 0.01563
Epoch: 0012 train_loss= 1.07723 train_acc= 0.51818 val_loss= 0.73582 val_acc= 0.59016 time= 0.00000
Epoch: 0013 train_loss= 0.88226 train_acc= 0.53636 val_loss= 0.71153 val_acc= 0.55738 time= 0.01562
Epoch: 0014 train_loss= 0.90980 train_acc= 0.53333 val_loss= 0.70030 val_acc= 0.55738 time= 0.01563
Epoch: 0015 train_loss= 0.91586 train_acc= 0.53333 val_loss= 0.70394 val_acc= 0.55738 time= 0.00000
Epoch: 0016 train_loss= 1.05134 train_acc= 0.48485 val_loss= 0.71307 val_acc= 0.55738 time= 0.01563
Epoch: 0017 train_loss= 1.03873 train_acc= 0.48788 val_loss= 0.72146 val_acc= 0.49180 time= 0.01562
Epoch: 0018 train_loss= 0.78668 train_acc= 0.51515 val_loss= 0.72806 val_acc= 0.54098 time= 0.00000
Epoch: 0019 train_loss= 0.76030 train_acc= 0.54848 val_loss= 0.72925 val_acc= 0.52459 time= 0.01563
Epoch: 0020 train_loss= 1.07920 train_acc= 0.45455 val_loss= 0.72073 val_acc= 0.57377 time= 0.01563
Epoch: 0021 train_loss= 0.86647 train_acc= 0.50606 val_loss= 0.70946 val_acc= 0.57377 time= 0.00000
Epoch: 0022 train_loss= 0.77846 train_acc= 0.50000 val_loss= 0.69892 val_acc= 0.60656 time= 0.01563
Epoch: 0023 train_loss= 0.77757 train_acc= 0.52727 val_loss= 0.69225 val_acc= 0.62295 time= 0.01563
Epoch: 0024 train_loss= 0.86612 train_acc= 0.49091 val_loss= 0.68428 val_acc= 0.59016 time= 0.01563
Epoch: 0025 train_loss= 0.78922 train_acc= 0.51515 val_loss= 0.67952 val_acc= 0.59016 time= 0.01342
Epoch: 0026 train_loss= 0.74292 train_acc= 0.50606 val_loss= 0.67599 val_acc= 0.62295 time= 0.01513
Epoch: 0027 train_loss= 0.82066 train_acc= 0.50909 val_loss= 0.67431 val_acc= 0.63934 time= 0.02325
Epoch: 0028 train_loss= 0.77945 train_acc= 0.52121 val_loss= 0.67287 val_acc= 0.63934 time= 0.01507
Epoch: 0029 train_loss= 0.78193 train_acc= 0.53636 val_loss= 0.67207 val_acc= 0.63934 time= 0.00701
Epoch: 0030 train_loss= 0.87357 train_acc= 0.51515 val_loss= 0.67244 val_acc= 0.63934 time= 0.01563
Epoch: 0031 train_loss= 0.80042 train_acc= 0.46667 val_loss= 0.67310 val_acc= 0.65574 time= 0.00000
Epoch: 0032 train_loss= 0.75358 train_acc= 0.54545 val_loss= 0.67331 val_acc= 0.63934 time= 0.01563
Epoch: 0033 train_loss= 0.75021 train_acc= 0.51818 val_loss= 0.67362 val_acc= 0.62295 time= 0.01563
Epoch: 0034 train_loss= 0.80220 train_acc= 0.53030 val_loss= 0.67488 val_acc= 0.63934 time= 0.00000
Epoch: 0035 train_loss= 0.75066 train_acc= 0.50606 val_loss= 0.67623 val_acc= 0.62295 time= 0.01562
Early stopping...
Optimization Finished!
Test set results: cost= 0.68427 accuracy= 0.54918 time= 0.00000 
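The log above ends with "Early stopping..." at epoch 35. As a hedged illustration (not the original script's code), the sketch below parses lines in this format and applies one common patience-style rule: stop once the latest validation loss exceeds the mean of the previous 10 validation losses. On the `val_loss` values above, that rule fires exactly at epoch 35, consistent with the log. The names `parse_log` and `should_stop`, and the stopping rule itself, are assumptions for illustration.

```python
import re
from statistics import mean

# Pattern for lines like:
# "Epoch: 0035 train_loss= 0.75066 train_acc= 0.50606 val_loss= 0.67623 val_acc= 0.62295 time= 0.01562"
LINE_RE = re.compile(
    r"Epoch: (\d+) train_loss= ([\d.]+) train_acc= ([\d.]+) "
    r"val_loss= ([\d.]+) val_acc= ([\d.]+) time= ([\d.]+)"
)

def parse_log(lines):
    """Return one dict per epoch line; non-matching lines are skipped."""
    rows = []
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            epoch, tl, ta, vl, va, t = m.groups()
            rows.append({
                "epoch": int(epoch),
                "train_loss": float(tl), "train_acc": float(ta),
                "val_loss": float(vl), "val_acc": float(va),
                "time": float(t),
            })
    return rows

def should_stop(val_losses, window=10):
    """Assumed rule: stop when the newest validation loss is above the
    mean of the `window` losses that immediately precede it."""
    if len(val_losses) <= window:
        return False  # not enough history yet
    return val_losses[-1] > mean(val_losses[-(window + 1):-1])
```

For example, feeding the 35 validation losses from this run into `should_stop` epoch by epoch, the condition is first true at epoch 35, after which the script would print "Early stopping..." and evaluate on the test set.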
