Epoch: 0001 train_loss= 1.39609 train_acc= 0.19832 val_loss= 1.39594 val_acc= 0.21429 time= 0.64067
Epoch: 0002 train_loss= 1.39377 train_acc= 0.23743 val_loss= 1.39229 val_acc= 0.21429 time= 0.01562
Epoch: 0003 train_loss= 1.39193 train_acc= 0.25000 val_loss= 1.38895 val_acc= 0.39286 time= 0.01563
Epoch: 0004 train_loss= 1.39036 train_acc= 0.29609 val_loss= 1.38595 val_acc= 0.39286 time= 0.01562
Epoch: 0005 train_loss= 1.38911 train_acc= 0.29888 val_loss= 1.38421 val_acc= 0.39286 time= 0.01563
Epoch: 0006 train_loss= 1.38794 train_acc= 0.30028 val_loss= 1.38274 val_acc= 0.39286 time= 0.01563
Epoch: 0007 train_loss= 1.38758 train_acc= 0.30307 val_loss= 1.38128 val_acc= 0.39286 time= 0.01563
Epoch: 0008 train_loss= 1.38671 train_acc= 0.30028 val_loss= 1.37978 val_acc= 0.39286 time= 0.01563
Epoch: 0009 train_loss= 1.38591 train_acc= 0.30168 val_loss= 1.37827 val_acc= 0.39286 time= 0.01563
Epoch: 0010 train_loss= 1.38456 train_acc= 0.30168 val_loss= 1.37675 val_acc= 0.39286 time= 0.01562
Epoch: 0011 train_loss= 1.38347 train_acc= 0.30168 val_loss= 1.37521 val_acc= 0.39286 time= 0.01563
Epoch: 0012 train_loss= 1.38360 train_acc= 0.30168 val_loss= 1.37368 val_acc= 0.39286 time= 0.01563
Epoch: 0013 train_loss= 1.38284 train_acc= 0.30168 val_loss= 1.37213 val_acc= 0.39286 time= 0.01563
Epoch: 0014 train_loss= 1.38262 train_acc= 0.30168 val_loss= 1.37060 val_acc= 0.39286 time= 0.01562
Epoch: 0015 train_loss= 1.38229 train_acc= 0.30168 val_loss= 1.36908 val_acc= 0.39286 time= 0.01563
Epoch: 0016 train_loss= 1.38222 train_acc= 0.30168 val_loss= 1.36762 val_acc= 0.39286 time= 0.01563
Epoch: 0017 train_loss= 1.38099 train_acc= 0.30307 val_loss= 1.36616 val_acc= 0.39286 time= 0.01562
Epoch: 0018 train_loss= 1.38126 train_acc= 0.30168 val_loss= 1.36473 val_acc= 0.39286 time= 0.01563
Epoch: 0019 train_loss= 1.38081 train_acc= 0.30168 val_loss= 1.36341 val_acc= 0.39286 time= 0.01563
Epoch: 0020 train_loss= 1.38174 train_acc= 0.30168 val_loss= 1.36222 val_acc= 0.39286 time= 0.01563
Epoch: 0021 train_loss= 1.38145 train_acc= 0.30168 val_loss= 1.36126 val_acc= 0.39286 time= 0.01563
Epoch: 0022 train_loss= 1.37998 train_acc= 0.30168 val_loss= 1.36042 val_acc= 0.39286 time= 0.01563
Epoch: 0023 train_loss= 1.38103 train_acc= 0.30168 val_loss= 1.35976 val_acc= 0.39286 time= 0.01563
Epoch: 0024 train_loss= 1.38004 train_acc= 0.30168 val_loss= 1.35928 val_acc= 0.39286 time= 0.01563
Epoch: 0025 train_loss= 1.38097 train_acc= 0.30168 val_loss= 1.35893 val_acc= 0.39286 time= 0.00000
Epoch: 0026 train_loss= 1.38120 train_acc= 0.30168 val_loss= 1.35879 val_acc= 0.39286 time= 0.01563
Epoch: 0027 train_loss= 1.38009 train_acc= 0.30168 val_loss= 1.35884 val_acc= 0.39286 time= 0.01563
Epoch: 0028 train_loss= 1.37982 train_acc= 0.30307 val_loss= 1.35903 val_acc= 0.39286 time= 0.01562
Epoch: 0029 train_loss= 1.37951 train_acc= 0.30307 val_loss= 1.35929 val_acc= 0.39286 time= 0.01563
Epoch: 0030 train_loss= 1.37920 train_acc= 0.30168 val_loss= 1.35964 val_acc= 0.39286 time= 0.01563
Epoch: 0031 train_loss= 1.37900 train_acc= 0.30307 val_loss= 1.36001 val_acc= 0.39286 time= 0.01563
Early stopping...
Optimization Finished!
Test set results: cost= 1.37753 accuracy= 0.29204 time= 0.01563 
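The run above halts once the validation loss stops improving: val_loss bottoms out around epoch 26 (1.35879) and then rises for the next five epochs before "Early stopping..." fires. The actual training script is not shown here; the sketch below is a minimal, hypothetical version of such a patience-based check (the function name and the patience value of 5 are assumptions inferred from the log, not taken from the source).

```python
def should_stop(val_losses, patience=5):
    """Return True when the last `patience` validation losses are all
    worse than the best loss seen before them (no recent improvement).

    `patience=5` is a guess inferred from the log: val_loss stops
    improving at epoch 26 and training halts at epoch 31.
    """
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    return all(v > best_before for v in val_losses[-patience:])

# Tail of the logged validation losses (epochs 26-31):
recent = [1.35879, 1.35884, 1.35903, 1.35929, 1.35964, 1.36001]
print(should_stop(recent))  # True: no improvement in the last 5 epochs
```

A check like this would be called once per epoch with the running list of validation losses; other implementations instead track a best-loss counter, but the effect on a log like the one above is the same.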
