Epoch: 0001 train_loss= 1.40051 train_acc= 0.47143 val_loss= 0.70998 val_acc= 0.54098 time= 0.29297
Epoch: 0002 train_loss= 0.97656 train_acc= 0.48052 val_loss= 0.78379 val_acc= 0.47541 time= 0.01563
Epoch: 0003 train_loss= 1.10784 train_acc= 0.50649 val_loss= 0.83916 val_acc= 0.50820 time= 0.01563
Epoch: 0004 train_loss= 0.90051 train_acc= 0.47792 val_loss= 0.84329 val_acc= 0.50820 time= 0.00000
Epoch: 0005 train_loss= 1.25691 train_acc= 0.47273 val_loss= 0.96066 val_acc= 0.47541 time= 0.01563
Epoch: 0006 train_loss= 1.00131 train_acc= 0.52078 val_loss= 0.98596 val_acc= 0.47541 time= 0.01563
Epoch: 0007 train_loss= 1.36131 train_acc= 0.52208 val_loss= 0.92189 val_acc= 0.50820 time= 0.01563
Epoch: 0008 train_loss= 0.95525 train_acc= 0.48831 val_loss= 0.90082 val_acc= 0.52459 time= 0.01563
Epoch: 0009 train_loss= 1.06470 train_acc= 0.52078 val_loss= 0.91587 val_acc= 0.50820 time= 0.01563
Epoch: 0010 train_loss= 0.91997 train_acc= 0.50909 val_loss= 0.91492 val_acc= 0.50820 time= 0.00000
Epoch: 0011 train_loss= 0.99140 train_acc= 0.50260 val_loss= 0.88754 val_acc= 0.50820 time= 0.01563
Epoch: 0012 train_loss= 1.01351 train_acc= 0.49221 val_loss= 0.89095 val_acc= 0.50820 time= 0.01563
Epoch: 0013 train_loss= 0.77205 train_acc= 0.49351 val_loss= 0.88595 val_acc= 0.47541 time= 0.01563
Epoch: 0014 train_loss= 0.78314 train_acc= 0.51169 val_loss= 0.87504 val_acc= 0.47541 time= 0.01563
Epoch: 0015 train_loss= 0.86313 train_acc= 0.52468 val_loss= 0.85265 val_acc= 0.47541 time= 0.01563
Epoch: 0016 train_loss= 0.83364 train_acc= 0.51818 val_loss= 0.82839 val_acc= 0.47541 time= 0.01563
Epoch: 0017 train_loss= 0.80003 train_acc= 0.49870 val_loss= 0.79123 val_acc= 0.50820 time= 0.00000
Epoch: 0018 train_loss= 0.87941 train_acc= 0.51299 val_loss= 0.75212 val_acc= 0.50820 time= 0.01563
Epoch: 0019 train_loss= 1.01514 train_acc= 0.48312 val_loss= 0.74174 val_acc= 0.49180 time= 0.01562
Epoch: 0020 train_loss= 0.81574 train_acc= 0.50649 val_loss= 0.72927 val_acc= 0.47541 time= 0.01563
Epoch: 0021 train_loss= 0.73550 train_acc= 0.52208 val_loss= 0.71960 val_acc= 0.49180 time= 0.01563
Epoch: 0022 train_loss= 0.85513 train_acc= 0.48831 val_loss= 0.71548 val_acc= 0.47541 time= 0.01563
Epoch: 0023 train_loss= 0.83540 train_acc= 0.47662 val_loss= 0.71636 val_acc= 0.47541 time= 0.00000
Epoch: 0024 train_loss= 0.83592 train_acc= 0.48312 val_loss= 0.71983 val_acc= 0.45902 time= 0.01563
Epoch: 0025 train_loss= 0.77815 train_acc= 0.47532 val_loss= 0.72174 val_acc= 0.45902 time= 0.02800
Epoch: 0026 train_loss= 0.77160 train_acc= 0.51688 val_loss= 0.72093 val_acc= 0.45902 time= 0.00700
Epoch: 0027 train_loss= 0.73068 train_acc= 0.49870 val_loss= 0.72211 val_acc= 0.44262 time= 0.01567
Epoch: 0028 train_loss= 0.75283 train_acc= 0.48701 val_loss= 0.72455 val_acc= 0.49180 time= 0.01563
Epoch: 0029 train_loss= 0.75184 train_acc= 0.50779 val_loss= 0.72844 val_acc= 0.47541 time= 0.00000
Early stopping...
Optimization Finished!
Test set results: cost= 0.70896 accuracy= 0.50000 time= 0.01563 
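The log above does not show the training script itself. As a minimal sketch (all names hypothetical, and the exact stopping rule is an assumption), a loop that emits lines in this format and stops early when validation loss exceeds the mean of the last `patience` validation losses could look like:

```python
import time

def train(model_step, num_epochs=200, patience=10):
    """Run `model_step` once per epoch, log metrics in the format above,
    and stop early when val_loss exceeds the running mean of the last
    `patience` validation losses (one common early-stopping criterion;
    the criterion used for the log above is not shown)."""
    val_losses = []
    for epoch in range(num_epochs):
        t = time.time()
        # model_step() is assumed to run one training epoch and return
        # (train_loss, train_acc, val_loss, val_acc).
        train_loss, train_acc, val_loss, val_acc = model_step()
        val_losses.append(val_loss)
        print(f"Epoch: {epoch + 1:04d} train_loss= {train_loss:.5f} "
              f"train_acc= {train_acc:.5f} val_loss= {val_loss:.5f} "
              f"val_acc= {val_acc:.5f} time= {time.time() - t:.5f}")
        if (epoch > patience and
                val_losses[-1] > sum(val_losses[-(patience + 1):-1]) / patience):
            print("Early stopping...")
            break
    print("Optimization Finished!")
```

Note that with this criterion the loop can continue for several epochs after the best validation loss (epoch 22 in the log) before stopping, which matches the behavior seen above.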
