Epoch: 0001 train_loss= 0.69881 train_acc= 0.51636 val_loss= 0.69646 val_acc= 0.55738 time= 0.20314
Epoch: 0002 train_loss= 0.69814 train_acc= 0.51636 val_loss= 0.69557 val_acc= 0.55738 time= 0.01562
Epoch: 0003 train_loss= 0.69766 train_acc= 0.51636 val_loss= 0.69481 val_acc= 0.55738 time= 0.01563
Epoch: 0004 train_loss= 0.69720 train_acc= 0.51636 val_loss= 0.69427 val_acc= 0.55738 time= 0.01563
Epoch: 0005 train_loss= 0.69675 train_acc= 0.51636 val_loss= 0.69390 val_acc= 0.55738 time= 0.00000
Epoch: 0006 train_loss= 0.69625 train_acc= 0.51636 val_loss= 0.69354 val_acc= 0.55738 time= 0.01562
Epoch: 0007 train_loss= 0.69593 train_acc= 0.51636 val_loss= 0.69319 val_acc= 0.55738 time= 0.01563
Epoch: 0008 train_loss= 0.69557 train_acc= 0.51636 val_loss= 0.69290 val_acc= 0.55738 time= 0.01563
Epoch: 0009 train_loss= 0.69537 train_acc= 0.51636 val_loss= 0.69274 val_acc= 0.55738 time= 0.01563
Epoch: 0010 train_loss= 0.69488 train_acc= 0.51636 val_loss= 0.69251 val_acc= 0.55738 time= 0.01563
Epoch: 0011 train_loss= 0.69461 train_acc= 0.51636 val_loss= 0.69221 val_acc= 0.55738 time= 0.00000
Epoch: 0012 train_loss= 0.69443 train_acc= 0.51636 val_loss= 0.69189 val_acc= 0.55738 time= 0.01563
Epoch: 0013 train_loss= 0.69416 train_acc= 0.51636 val_loss= 0.69155 val_acc= 0.55738 time= 0.01563
Epoch: 0014 train_loss= 0.69407 train_acc= 0.51636 val_loss= 0.69127 val_acc= 0.55738 time= 0.01563
Epoch: 0015 train_loss= 0.69396 train_acc= 0.51636 val_loss= 0.69108 val_acc= 0.55738 time= 0.01563
Epoch: 0016 train_loss= 0.69377 train_acc= 0.51636 val_loss= 0.69096 val_acc= 0.55738 time= 0.01563
Epoch: 0017 train_loss= 0.69364 train_acc= 0.51636 val_loss= 0.69088 val_acc= 0.55738 time= 0.01563
Epoch: 0018 train_loss= 0.69343 train_acc= 0.51636 val_loss= 0.69084 val_acc= 0.55738 time= 0.00000
Epoch: 0019 train_loss= 0.69338 train_acc= 0.51636 val_loss= 0.69082 val_acc= 0.55738 time= 0.01562
Epoch: 0020 train_loss= 0.69334 train_acc= 0.51636 val_loss= 0.69082 val_acc= 0.55738 time= 0.01563
Epoch: 0021 train_loss= 0.69322 train_acc= 0.51636 val_loss= 0.69084 val_acc= 0.55738 time= 0.01563
Epoch: 0022 train_loss= 0.69313 train_acc= 0.51636 val_loss= 0.69082 val_acc= 0.55738 time= 0.01563
Epoch: 0023 train_loss= 0.69311 train_acc= 0.51636 val_loss= 0.69076 val_acc= 0.55738 time= 0.01563
Epoch: 0024 train_loss= 0.69309 train_acc= 0.51636 val_loss= 0.69067 val_acc= 0.55738 time= 0.01563
Epoch: 0025 train_loss= 0.69310 train_acc= 0.51636 val_loss= 0.69066 val_acc= 0.55738 time= 0.01563
Epoch: 0026 train_loss= 0.69295 train_acc= 0.51636 val_loss= 0.69060 val_acc= 0.55738 time= 0.00000
Epoch: 0027 train_loss= 0.69286 train_acc= 0.51636 val_loss= 0.69047 val_acc= 0.55738 time= 0.01563
Epoch: 0028 train_loss= 0.69292 train_acc= 0.51636 val_loss= 0.69034 val_acc= 0.55738 time= 0.01563
Epoch: 0029 train_loss= 0.69284 train_acc= 0.51636 val_loss= 0.69025 val_acc= 0.55738 time= 0.01563
Epoch: 0030 train_loss= 0.69281 train_acc= 0.51636 val_loss= 0.69013 val_acc= 0.55738 time= 0.01562
Epoch: 0031 train_loss= 0.69294 train_acc= 0.51636 val_loss= 0.69010 val_acc= 0.55738 time= 0.01563
Epoch: 0032 train_loss= 0.69289 train_acc= 0.51636 val_loss= 0.69016 val_acc= 0.55738 time= 0.01563
Epoch: 0033 train_loss= 0.69277 train_acc= 0.51636 val_loss= 0.69021 val_acc= 0.55738 time= 0.01476
Epoch: 0034 train_loss= 0.69288 train_acc= 0.51636 val_loss= 0.69032 val_acc= 0.55738 time= 0.00115
Epoch: 0035 train_loss= 0.69291 train_acc= 0.51636 val_loss= 0.69051 val_acc= 0.55738 time= 0.02336
Early stopping...
Optimization Finished!
Test set results: cost= 0.69153 accuracy= 0.54918 time= 0.00000
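The "Early stopping..." line above fires after epoch 35: the best validation loss (0.69010) occurs at epoch 31, and val_loss then rises for four consecutive epochs. A patience-based check consistent with that behavior might look like the sketch below; the patience value of 4 and the `EarlyStopping` class interface are assumptions for illustration, not taken from the actual training script.

```python
class EarlyStopping:
    """Signal a stop when val_loss has not improved for `patience` epochs."""

    def __init__(self, patience=4):
        self.patience = patience
        self.best_loss = float("inf")
        self.counter = 0  # epochs since the last improvement

    def step(self, val_loss):
        """Record one epoch's val_loss; return True when training should stop."""
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience

# Replaying the tail of the log above (epochs 31-35):
stopper = EarlyStopping(patience=4)
tail = [0.69010, 0.69016, 0.69021, 0.69032, 0.69051]
flags = [stopper.step(v) for v in tail]
# flags -> [False, False, False, False, True]: stop is raised at epoch 35,
# matching where the log prints "Early stopping...".
```

Under these assumptions the stop condition is driven purely by val_loss; the constant val_acc column plays no role in the trigger.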
