Epoch: 0001 train_loss= 0.69778 train_acc= 0.52727 val_loss= 0.69412 val_acc= 0.59016 time= 0.80716
Epoch: 0002 train_loss= 0.69834 train_acc= 0.51818 val_loss= 0.69399 val_acc= 0.59016 time= 0.01562
Epoch: 0003 train_loss= 0.69828 train_acc= 0.51688 val_loss= 0.69408 val_acc= 0.59016 time= 0.00000
Epoch: 0004 train_loss= 0.69833 train_acc= 0.51169 val_loss= 0.69382 val_acc= 0.59016 time= 0.00000
Epoch: 0005 train_loss= 0.69846 train_acc= 0.51169 val_loss= 0.69364 val_acc= 0.59016 time= 0.01563
Epoch: 0006 train_loss= 0.69495 train_acc= 0.53117 val_loss= 0.69313 val_acc= 0.59016 time= 0.01227
Epoch: 0007 train_loss= 0.69552 train_acc= 0.51818 val_loss= 0.69263 val_acc= 0.59016 time= 0.00700
Epoch: 0008 train_loss= 0.69486 train_acc= 0.52078 val_loss= 0.69205 val_acc= 0.59016 time= 0.00600
Epoch: 0009 train_loss= 0.69515 train_acc= 0.52727 val_loss= 0.69152 val_acc= 0.59016 time= 0.00700
Epoch: 0010 train_loss= 0.69496 train_acc= 0.51429 val_loss= 0.69108 val_acc= 0.59016 time= 0.00600
Epoch: 0011 train_loss= 0.69434 train_acc= 0.51818 val_loss= 0.69072 val_acc= 0.59016 time= 0.00600
Epoch: 0012 train_loss= 0.69357 train_acc= 0.51169 val_loss= 0.69043 val_acc= 0.59016 time= 0.00606
Epoch: 0013 train_loss= 0.69220 train_acc= 0.52468 val_loss= 0.69028 val_acc= 0.59016 time= 0.00000
Epoch: 0014 train_loss= 0.69359 train_acc= 0.51818 val_loss= 0.69025 val_acc= 0.59016 time= 0.00000
Epoch: 0015 train_loss= 0.69262 train_acc= 0.52078 val_loss= 0.69017 val_acc= 0.59016 time= 0.01562
Epoch: 0016 train_loss= 0.69308 train_acc= 0.52078 val_loss= 0.69014 val_acc= 0.59016 time= 0.00000
Epoch: 0017 train_loss= 0.69275 train_acc= 0.51818 val_loss= 0.69010 val_acc= 0.59016 time= 0.01563
Epoch: 0018 train_loss= 0.69138 train_acc= 0.54026 val_loss= 0.69001 val_acc= 0.59016 time= 0.00000
Epoch: 0019 train_loss= 0.69235 train_acc= 0.52727 val_loss= 0.68993 val_acc= 0.57377 time= 0.00000
Epoch: 0020 train_loss= 0.69216 train_acc= 0.52727 val_loss= 0.68990 val_acc= 0.57377 time= 0.01563
Epoch: 0021 train_loss= 0.69186 train_acc= 0.53506 val_loss= 0.68971 val_acc= 0.57377 time= 0.00000
Epoch: 0022 train_loss= 0.69242 train_acc= 0.51429 val_loss= 0.68947 val_acc= 0.57377 time= 0.01563
Epoch: 0023 train_loss= 0.69175 train_acc= 0.52727 val_loss= 0.68935 val_acc= 0.57377 time= 0.00000
Epoch: 0024 train_loss= 0.69245 train_acc= 0.51818 val_loss= 0.68931 val_acc= 0.57377 time= 0.00000
Epoch: 0025 train_loss= 0.69259 train_acc= 0.52468 val_loss= 0.68931 val_acc= 0.57377 time= 0.01563
Epoch: 0026 train_loss= 0.69108 train_acc= 0.52078 val_loss= 0.68928 val_acc= 0.57377 time= 0.00000
Epoch: 0027 train_loss= 0.69196 train_acc= 0.52987 val_loss= 0.68928 val_acc= 0.57377 time= 0.00000
Epoch: 0028 train_loss= 0.69233 train_acc= 0.53636 val_loss= 0.68933 val_acc= 0.57377 time= 0.01563
Epoch: 0029 train_loss= 0.69061 train_acc= 0.52338 val_loss= 0.68938 val_acc= 0.57377 time= 0.00000
Epoch: 0030 train_loss= 0.68981 train_acc= 0.52208 val_loss= 0.68949 val_acc= 0.59016 time= 0.01563
Early stopping...
Optimization Finished!
Test set results: cost= 0.69278 accuracy= 0.52459 time= 0.00000 
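The log above appears to come from a training loop that prints one fixed-format line per epoch and halts on an early-stopping criterion once validation loss stops improving (the loss plateaus near 0.693, i.e. chance level for binary cross-entropy). A minimal sketch of such a loop is below; the `format_epoch` helper and the stopping rule (current validation loss exceeding the mean of the last `window` epochs, as used in some GCN reference implementations) are assumptions, not taken from this log's source code.

```python
def format_epoch(epoch, train_loss, train_acc, val_loss, val_acc, elapsed):
    # Hypothetical helper reproducing the per-epoch log layout seen above.
    return (f"Epoch: {epoch:04d} train_loss= {train_loss:.5f} "
            f"train_acc= {train_acc:.5f} val_loss= {val_loss:.5f} "
            f"val_acc= {val_acc:.5f} time= {elapsed:.5f}")

def train(val_losses, window=10):
    # Early-stopping rule (an assumption): stop once the current epoch's
    # validation loss exceeds the mean of the previous `window` epochs.
    history = []
    epoch = 0
    for epoch, vl in enumerate(val_losses, start=1):
        history.append(vl)
        if epoch > window and vl > sum(history[-(window + 1):-1]) / window:
            print("Early stopping...")
            break
    print("Optimization Finished!")
    return epoch
```

With a validation-loss sequence that decreases and then jumps, `train` stops early exactly as in the log; with a monotonically decreasing sequence it runs all epochs before printing the final message.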
