Epoch: 0001 train_loss= 1.72062 train_acc= 0.47273 val_loss= 1.19506 val_acc= 0.49180 time= 0.18751
Epoch: 0002 train_loss= 1.15614 train_acc= 0.46364 val_loss= 1.05596 val_acc= 0.39344 time= 0.01562
Epoch: 0003 train_loss= 0.76547 train_acc= 0.49273 val_loss= 1.00609 val_acc= 0.37705 time= 0.01563
Epoch: 0004 train_loss= 1.81109 train_acc= 0.48000 val_loss= 0.97652 val_acc= 0.40984 time= 0.01562
Epoch: 0005 train_loss= 1.07628 train_acc= 0.47091 val_loss= 0.99723 val_acc= 0.49180 time= 0.01563
Epoch: 0006 train_loss= 0.96097 train_acc= 0.45818 val_loss= 1.06821 val_acc= 0.50820 time= 0.00000
Epoch: 0007 train_loss= 0.84135 train_acc= 0.47455 val_loss= 1.17785 val_acc= 0.50820 time= 0.01563
Epoch: 0008 train_loss= 0.92362 train_acc= 0.52182 val_loss= 1.23293 val_acc= 0.52459 time= 0.01563
Epoch: 0009 train_loss= 1.11104 train_acc= 0.53818 val_loss= 1.22797 val_acc= 0.52459 time= 0.01563
Epoch: 0010 train_loss= 0.98453 train_acc= 0.53273 val_loss= 1.19469 val_acc= 0.52459 time= 0.01563
Epoch: 0011 train_loss= 0.86191 train_acc= 0.52727 val_loss= 1.15018 val_acc= 0.52459 time= 0.01562
Epoch: 0012 train_loss= 0.91372 train_acc= 0.54182 val_loss= 1.09855 val_acc= 0.52459 time= 0.01563
Epoch: 0013 train_loss= 0.91419 train_acc= 0.54000 val_loss= 1.04002 val_acc= 0.52459 time= 0.00000
Epoch: 0014 train_loss= 0.89968 train_acc= 0.52364 val_loss= 0.97962 val_acc= 0.52459 time= 0.01563
Epoch: 0015 train_loss= 0.78856 train_acc= 0.52182 val_loss= 0.92462 val_acc= 0.50820 time= 0.01562
Epoch: 0016 train_loss= 0.87909 train_acc= 0.46000 val_loss= 0.88463 val_acc= 0.49180 time= 0.01563
Epoch: 0017 train_loss= 0.86172 train_acc= 0.48545 val_loss= 0.85661 val_acc= 0.49180 time= 0.01563
Epoch: 0018 train_loss= 0.71815 train_acc= 0.50000 val_loss= 0.83337 val_acc= 0.50820 time= 0.01563
Epoch: 0019 train_loss= 0.73773 train_acc= 0.51455 val_loss= 0.81102 val_acc= 0.52459 time= 0.01563
Epoch: 0020 train_loss= 0.74894 train_acc= 0.51818 val_loss= 0.79439 val_acc= 0.50820 time= 0.00000
Epoch: 0021 train_loss= 0.75253 train_acc= 0.52182 val_loss= 0.78210 val_acc= 0.52459 time= 0.01562
Epoch: 0022 train_loss= 0.75804 train_acc= 0.50727 val_loss= 0.77194 val_acc= 0.54098 time= 0.01563
Epoch: 0023 train_loss= 0.76302 train_acc= 0.53636 val_loss= 0.76444 val_acc= 0.57377 time= 0.01563
Epoch: 0024 train_loss= 0.72918 train_acc= 0.49455 val_loss= 0.75915 val_acc= 0.55738 time= 0.01563
Epoch: 0025 train_loss= 0.72597 train_acc= 0.49091 val_loss= 0.75447 val_acc= 0.57377 time= 0.01562
Epoch: 0026 train_loss= 0.73532 train_acc= 0.53273 val_loss= 0.74982 val_acc= 0.55738 time= 0.01563
Epoch: 0027 train_loss= 0.71566 train_acc= 0.52909 val_loss= 0.74602 val_acc= 0.52459 time= 0.01563
Epoch: 0028 train_loss= 0.78974 train_acc= 0.50182 val_loss= 0.74301 val_acc= 0.52459 time= 0.00000
Epoch: 0029 train_loss= 0.71671 train_acc= 0.51636 val_loss= 0.74049 val_acc= 0.50820 time= 0.01563
Epoch: 0030 train_loss= 0.72449 train_acc= 0.48727 val_loss= 0.73813 val_acc= 0.50820 time= 0.01563
Epoch: 0031 train_loss= 0.75717 train_acc= 0.52182 val_loss= 0.73553 val_acc= 0.49180 time= 0.01563
Epoch: 0032 train_loss= 0.71614 train_acc= 0.50909 val_loss= 0.73343 val_acc= 0.49180 time= 0.01563
Epoch: 0033 train_loss= 0.78476 train_acc= 0.47636 val_loss= 0.73222 val_acc= 0.49180 time= 0.01563
Epoch: 0034 train_loss= 0.71346 train_acc= 0.51636 val_loss= 0.73089 val_acc= 0.49180 time= 0.01563
Epoch: 0035 train_loss= 0.76480 train_acc= 0.49091 val_loss= 0.73091 val_acc= 0.54098 time= 0.01563
Epoch: 0036 train_loss= 0.72717 train_acc= 0.51273 val_loss= 0.73287 val_acc= 0.55738 time= 0.00000
Epoch: 0037 train_loss= 0.72554 train_acc= 0.49091 val_loss= 0.73623 val_acc= 0.50820 time= 0.01563
Epoch: 0038 train_loss= 0.70686 train_acc= 0.53455 val_loss= 0.73912 val_acc= 0.45902 time= 0.01563
Early stopping...
Optimization Finished!
Test set results: cost= 0.70276 accuracy= 0.49180 time= 0.01563 
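The run above halts at epoch 38 with an "Early stopping..." message even though the epoch budget was not exhausted. A common criterion for this (used, for example, in reference GCN training scripts whose log format this resembles) is to stop once the latest validation loss exceeds the mean of the preceding window of validation losses. A minimal sketch of that check — the patience window of 10 epochs is an assumption, as the actual window size is not recoverable from the log:

```python
def should_stop(val_losses, window=10):
    """Early-stopping check: stop when the latest validation loss is worse
    than the average of the previous `window` validation losses.

    `window` is an assumed patience value, not taken from the log above.
    """
    if len(val_losses) <= window:
        # Not enough history yet to form a comparison window.
        return False
    recent = val_losses[-(window + 1):-1]  # the `window` losses before the latest
    return val_losses[-1] > sum(recent) / len(recent)
```

With the validation losses from this log, such a check first fires at epoch 38: the val_loss of 0.73912 exceeds the mean (~0.7354) of the ten preceding values, which had been flattening out around 0.733 to 0.743. Note that the validation *loss* minimum (around epochs 33-34) does not coincide with the best validation *accuracy* (0.57377 at epochs 23 and 25), which is common on small validation sets (here, apparently 61 examples, given accuracies are multiples of 1/61).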
