Epoch: 0001 train_loss= 1.80338 train_acc= 0.25698 val_loss= 1.51354 val_acc= 0.31667 time= 1.00007
Epoch: 0002 train_loss= 1.52818 train_acc= 0.23883 val_loss= 1.46979 val_acc= 0.35000 time= 0.01563
Epoch: 0003 train_loss= 1.76090 train_acc= 0.22346 val_loss= 1.44650 val_acc= 0.33333 time= 0.01562
Epoch: 0004 train_loss= 1.45965 train_acc= 0.24581 val_loss= 1.43756 val_acc= 0.33333 time= 0.03125
Epoch: 0005 train_loss= 1.65423 train_acc= 0.24721 val_loss= 1.42512 val_acc= 0.26667 time= 0.03125
Epoch: 0006 train_loss= 1.41798 train_acc= 0.22346 val_loss= 1.41295 val_acc= 0.26667 time= 0.01563
Epoch: 0007 train_loss= 1.46666 train_acc= 0.25000 val_loss= 1.40482 val_acc= 0.26667 time= 0.03125
Epoch: 0008 train_loss= 1.40545 train_acc= 0.22486 val_loss= 1.39672 val_acc= 0.25000 time= 0.01563
Epoch: 0009 train_loss= 1.44661 train_acc= 0.23184 val_loss= 1.38960 val_acc= 0.28333 time= 0.03125
Epoch: 0010 train_loss= 1.50674 train_acc= 0.25559 val_loss= 1.38627 val_acc= 0.25000 time= 0.01563
Epoch: 0011 train_loss= 1.49411 train_acc= 0.24162 val_loss= 1.38668 val_acc= 0.28333 time= 0.03125
Epoch: 0012 train_loss= 1.39193 train_acc= 0.26955 val_loss= 1.38575 val_acc= 0.30000 time= 0.01563
Epoch: 0013 train_loss= 1.52601 train_acc= 0.25279 val_loss= 1.38479 val_acc= 0.28333 time= 0.03125
Epoch: 0014 train_loss= 1.40047 train_acc= 0.24022 val_loss= 1.38384 val_acc= 0.28333 time= 0.01563
Epoch: 0015 train_loss= 1.39566 train_acc= 0.26536 val_loss= 1.38311 val_acc= 0.30000 time= 0.03125
Epoch: 0016 train_loss= 1.39912 train_acc= 0.25279 val_loss= 1.38241 val_acc= 0.30000 time= 0.01563
Epoch: 0017 train_loss= 1.38903 train_acc= 0.26257 val_loss= 1.38174 val_acc= 0.31667 time= 0.03125
Epoch: 0018 train_loss= 1.39107 train_acc= 0.28771 val_loss= 1.38108 val_acc= 0.31667 time= 0.01563
Epoch: 0019 train_loss= 1.38553 train_acc= 0.30168 val_loss= 1.38050 val_acc= 0.33333 time= 0.03125
Epoch: 0020 train_loss= 1.39001 train_acc= 0.27793 val_loss= 1.37999 val_acc= 0.36667 time= 0.03125
Epoch: 0021 train_loss= 1.38451 train_acc= 0.28212 val_loss= 1.37947 val_acc= 0.38333 time= 0.01563
Epoch: 0022 train_loss= 1.39411 train_acc= 0.26257 val_loss= 1.37892 val_acc= 0.35000 time= 0.03125
Epoch: 0023 train_loss= 1.38812 train_acc= 0.28631 val_loss= 1.37825 val_acc= 0.33333 time= 0.01563
Epoch: 0024 train_loss= 1.38198 train_acc= 0.31564 val_loss= 1.37760 val_acc= 0.31667 time= 0.03125
Epoch: 0025 train_loss= 1.38317 train_acc= 0.30028 val_loss= 1.37712 val_acc= 0.31667 time= 0.01563
Epoch: 0026 train_loss= 1.38299 train_acc= 0.30587 val_loss= 1.37682 val_acc= 0.31667 time= 0.03125
Epoch: 0027 train_loss= 1.38357 train_acc= 0.30726 val_loss= 1.37651 val_acc= 0.31667 time= 0.01563
Epoch: 0028 train_loss= 1.39392 train_acc= 0.28911 val_loss= 1.37624 val_acc= 0.31667 time= 0.03125
Epoch: 0029 train_loss= 1.38538 train_acc= 0.30307 val_loss= 1.37628 val_acc= 0.31667 time= 0.03125
Epoch: 0030 train_loss= 1.40763 train_acc= 0.30168 val_loss= 1.37660 val_acc= 0.30000 time= 0.01563
Epoch: 0031 train_loss= 1.38518 train_acc= 0.29330 val_loss= 1.37708 val_acc= 0.30000 time= 0.03125
Epoch: 0032 train_loss= 1.37520 train_acc= 0.30168 val_loss= 1.37761 val_acc= 0.30000 time= 0.03125
Early stopping...
Optimization Finished!
Test set results: cost= 1.38779 accuracy= 0.35000 time= 0.01563 
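The run above stops at epoch 32, once validation loss has risen for several consecutive epochs after its minimum at epoch 28 (val_loss= 1.37624). A minimal sketch of the kind of patience-based early-stopping check and per-epoch log formatting that could produce output in this format is shown below; the function names (`format_epoch`, `should_stop`) and the `patience` parameter are hypothetical illustrations, not taken from the original script.

```python
def format_epoch(epoch, train_loss, train_acc, val_loss, val_acc, elapsed):
    """Render one log line in the same format as the training output above."""
    return ("Epoch: %04d train_loss= %.5f train_acc= %.5f "
            "val_loss= %.5f val_acc= %.5f time= %.5f"
            % (epoch, train_loss, train_acc, val_loss, val_acc, elapsed))

def should_stop(val_losses, patience=4):
    """Stop once validation loss has not improved for `patience` epochs.

    val_losses: list of per-epoch validation losses, oldest first.
    Returns True when the best loss in the last `patience` epochs is no
    better than the best loss seen before that window.
    """
    if len(val_losses) <= patience:
        return False  # not enough history to judge
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before
```

Applied to the log above with `patience=4`, the last four losses (epochs 29–32) are all above the epoch-28 minimum, so the check fires and training halts, consistent with the "Early stopping..." message.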
