Epoch: 0001 train_loss= 0.99260 train_acc= 0.53818 val_loss= 0.80099 val_acc= 0.55738 time= 0.41729
Epoch: 0002 train_loss= 1.52050 train_acc= 0.49091 val_loss= 0.77656 val_acc= 0.54098 time= 0.01563
Epoch: 0003 train_loss= 0.92603 train_acc= 0.54727 val_loss= 0.77127 val_acc= 0.54098 time= 0.01563
Epoch: 0004 train_loss= 0.93307 train_acc= 0.53455 val_loss= 0.75537 val_acc= 0.52459 time= 0.01562
Epoch: 0005 train_loss= 0.95561 train_acc= 0.49273 val_loss= 0.73711 val_acc= 0.49180 time= 0.01563
Epoch: 0006 train_loss= 0.76698 train_acc= 0.52909 val_loss= 0.72718 val_acc= 0.49180 time= 0.01563
Epoch: 0007 train_loss= 1.77439 train_acc= 0.49273 val_loss= 0.71735 val_acc= 0.52459 time= 0.01563
Epoch: 0008 train_loss= 0.76101 train_acc= 0.52727 val_loss= 0.71372 val_acc= 0.54098 time= 0.01563
Epoch: 0009 train_loss= 0.83809 train_acc= 0.51273 val_loss= 0.71252 val_acc= 0.54098 time= 0.01563
Epoch: 0010 train_loss= 0.89768 train_acc= 0.47091 val_loss= 0.71158 val_acc= 0.50820 time= 0.01563
Epoch: 0011 train_loss= 0.84620 train_acc= 0.53636 val_loss= 0.70941 val_acc= 0.49180 time= 0.00000
Epoch: 0012 train_loss= 0.74246 train_acc= 0.50182 val_loss= 0.70700 val_acc= 0.47541 time= 0.01563
Epoch: 0013 train_loss= 1.01858 train_acc= 0.53091 val_loss= 0.70428 val_acc= 0.49180 time= 0.01563
Epoch: 0014 train_loss= 0.83007 train_acc= 0.53455 val_loss= 0.70289 val_acc= 0.50820 time= 0.01563
Epoch: 0015 train_loss= 0.79397 train_acc= 0.53273 val_loss= 0.70173 val_acc= 0.54098 time= 0.01563
Epoch: 0016 train_loss= 0.75043 train_acc= 0.50727 val_loss= 0.70068 val_acc= 0.54098 time= 0.01563
Epoch: 0017 train_loss= 0.80036 train_acc= 0.50182 val_loss= 0.69994 val_acc= 0.54098 time= 0.00481
Epoch: 0018 train_loss= 0.72019 train_acc= 0.55818 val_loss= 0.69946 val_acc= 0.49180 time= 0.01101
Epoch: 0019 train_loss= 0.77444 train_acc= 0.54909 val_loss= 0.69907 val_acc= 0.49180 time= 0.01563
Epoch: 0020 train_loss= 0.86691 train_acc= 0.53818 val_loss= 0.69884 val_acc= 0.52459 time= 0.01563
Epoch: 0021 train_loss= 0.74426 train_acc= 0.47818 val_loss= 0.69900 val_acc= 0.55738 time= 0.01563
Epoch: 0022 train_loss= 0.73840 train_acc= 0.52182 val_loss= 0.69909 val_acc= 0.59016 time= 0.00000
Epoch: 0023 train_loss= 0.89806 train_acc= 0.44545 val_loss= 0.69865 val_acc= 0.55738 time= 0.01563
Epoch: 0024 train_loss= 0.71336 train_acc= 0.52182 val_loss= 0.69825 val_acc= 0.52459 time= 0.01562
Epoch: 0025 train_loss= 0.74014 train_acc= 0.53818 val_loss= 0.69760 val_acc= 0.52459 time= 0.01563
Epoch: 0026 train_loss= 0.73134 train_acc= 0.48727 val_loss= 0.69692 val_acc= 0.50820 time= 0.00000
Epoch: 0027 train_loss= 0.81031 train_acc= 0.50545 val_loss= 0.69636 val_acc= 0.52459 time= 0.01563
Epoch: 0028 train_loss= 0.81420 train_acc= 0.48000 val_loss= 0.69585 val_acc= 0.52459 time= 0.01563
Epoch: 0029 train_loss= 0.72748 train_acc= 0.50182 val_loss= 0.69573 val_acc= 0.52459 time= 0.01563
Epoch: 0030 train_loss= 0.78263 train_acc= 0.53091 val_loss= 0.69601 val_acc= 0.52459 time= 0.01563
Epoch: 0031 train_loss= 0.69545 train_acc= 0.55455 val_loss= 0.69636 val_acc= 0.52459 time= 0.00000
Epoch: 0032 train_loss= 0.78732 train_acc= 0.52727 val_loss= 0.69639 val_acc= 0.52459 time= 0.01563
Epoch: 0033 train_loss= 0.83988 train_acc= 0.54545 val_loss= 0.69621 val_acc= 0.52459 time= 0.01563
Epoch: 0034 train_loss= 0.70561 train_acc= 0.53273 val_loss= 0.69604 val_acc= 0.52459 time= 0.01563
Epoch: 0035 train_loss= 0.71504 train_acc= 0.52545 val_loss= 0.69601 val_acc= 0.54098 time= 0.01562
Epoch: 0036 train_loss= 0.70589 train_acc= 0.49455 val_loss= 0.69608 val_acc= 0.50820 time= 0.01563
Epoch: 0037 train_loss= 0.71977 train_acc= 0.54000 val_loss= 0.69617 val_acc= 0.50820 time= 0.00000
Early stopping...
Optimization Finished!
Test set results: cost= 0.72575 accuracy= 0.54098 time= 0.01563 
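The log above appears to come from a loop with patience-based early stopping on the validation loss: training halts at epoch 37 once val_loss stops improving, well before any fixed epoch budget. The actual training script is not included, so the sketch below is an assumption: it uses synthetic stand-in metrics (no real model or dataset) and a hypothetical `train_with_early_stopping` helper, but reproduces the log format and the stopping rule (stop when the current val_loss exceeds the mean of the previous `patience` epochs).

```python
import time

def train_with_early_stopping(num_epochs=200, patience=10):
    """Sketch of a loop that emits log lines in the format above.

    The model, optimizer, and dataset behind the real log are unknown;
    the metrics here are synthetic: val_loss dips and then rises, which
    is enough to trigger the early-stopping rule.
    """
    val_losses = []
    epochs_run = 0
    for epoch in range(num_epochs):
        t = time.time()
        # Synthetic metrics standing in for a real forward/backward pass.
        val_loss = 0.70 + 0.001 * abs(epoch - 15)  # minimum at epoch 16
        train_loss = val_loss + 0.05
        train_acc, val_acc = 0.52, 0.52
        print("Epoch: {:04d}".format(epoch + 1),
              "train_loss= {:.5f}".format(train_loss),
              "train_acc= {:.5f}".format(train_acc),
              "val_loss= {:.5f}".format(val_loss),
              "val_acc= {:.5f}".format(val_acc),
              "time= {:.5f}".format(time.time() - t))
        val_losses.append(val_loss)
        epochs_run = epoch + 1
        # Stop once val_loss exceeds the mean of the previous `patience` epochs.
        if epoch > patience and \
                val_losses[-1] > sum(val_losses[-(patience + 1):-1]) / patience:
            print("Early stopping...")
            break
    print("Optimization Finished!")
    return epochs_run

epochs = train_with_early_stopping()
```

With the synthetic metrics the loop stops after roughly 20 epochs rather than running the full 200, mirroring how the real run ended at epoch 37.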
