Epoch: 0001 train_loss= 0.70107 train_acc= 0.49091 val_loss= 0.69765 val_acc= 0.60656 time= 0.77559
Epoch: 0002 train_loss= 0.69770 train_acc= 0.54545 val_loss= 0.69455 val_acc= 0.62295 time= 0.01326
Epoch: 0003 train_loss= 0.69548 train_acc= 0.53377 val_loss= 0.69154 val_acc= 0.62295 time= 0.01200
Epoch: 0004 train_loss= 0.69323 train_acc= 0.54286 val_loss= 0.68887 val_acc= 0.62295 time= 0.01030
Epoch: 0005 train_loss= 0.69173 train_acc= 0.54286 val_loss= 0.68673 val_acc= 0.62295 time= 0.01108
Epoch: 0006 train_loss= 0.69011 train_acc= 0.54805 val_loss= 0.68513 val_acc= 0.62295 time= 0.01007
Epoch: 0007 train_loss= 0.68903 train_acc= 0.54675 val_loss= 0.68397 val_acc= 0.62295 time= 0.01500
Epoch: 0008 train_loss= 0.68849 train_acc= 0.54545 val_loss= 0.68317 val_acc= 0.62295 time= 0.01200
Epoch: 0009 train_loss= 0.68802 train_acc= 0.55584 val_loss= 0.68261 val_acc= 0.62295 time= 0.01129
Epoch: 0010 train_loss= 0.68788 train_acc= 0.55974 val_loss= 0.68208 val_acc= 0.62295 time= 0.01100
Epoch: 0011 train_loss= 0.68712 train_acc= 0.55714 val_loss= 0.68164 val_acc= 0.62295 time= 0.01200
Epoch: 0012 train_loss= 0.68563 train_acc= 0.57922 val_loss= 0.68077 val_acc= 0.62295 time= 0.01000
Epoch: 0013 train_loss= 0.68657 train_acc= 0.60130 val_loss= 0.67951 val_acc= 0.62295 time= 0.01000
Epoch: 0014 train_loss= 0.68422 train_acc= 0.57792 val_loss= 0.67846 val_acc= 0.62295 time= 0.01115
Epoch: 0015 train_loss= 0.68409 train_acc= 0.56753 val_loss= 0.67779 val_acc= 0.62295 time= 0.01000
Epoch: 0016 train_loss= 0.68388 train_acc= 0.57013 val_loss= 0.67720 val_acc= 0.62295 time= 0.01022
Epoch: 0017 train_loss= 0.68157 train_acc= 0.60779 val_loss= 0.67648 val_acc= 0.63934 time= 0.01007
Epoch: 0018 train_loss= 0.68107 train_acc= 0.59610 val_loss= 0.67575 val_acc= 0.63934 time= 0.00000
Epoch: 0019 train_loss= 0.67948 train_acc= 0.65455 val_loss= 0.67459 val_acc= 0.63934 time= 0.01562
Epoch: 0020 train_loss= 0.68063 train_acc= 0.63766 val_loss= 0.67337 val_acc= 0.63934 time= 0.00000
Epoch: 0021 train_loss= 0.67738 train_acc= 0.65325 val_loss= 0.67190 val_acc= 0.63934 time= 0.00000
Epoch: 0022 train_loss= 0.67928 train_acc= 0.63377 val_loss= 0.67004 val_acc= 0.63934 time= 0.01563
Epoch: 0023 train_loss= 0.67751 train_acc= 0.62727 val_loss= 0.66883 val_acc= 0.63934 time= 0.01435
Epoch: 0024 train_loss= 0.67748 train_acc= 0.56883 val_loss= 0.66901 val_acc= 0.63934 time= 0.01100
Epoch: 0025 train_loss= 0.67600 train_acc= 0.63247 val_loss= 0.66929 val_acc= 0.63934 time= 0.00958
Epoch: 0026 train_loss= 0.67828 train_acc= 0.61948 val_loss= 0.66995 val_acc= 0.70492 time= 0.00900
Epoch: 0027 train_loss= 0.67464 train_acc= 0.64935 val_loss= 0.67091 val_acc= 0.68852 time= 0.00000
Epoch: 0028 train_loss= 0.67206 train_acc= 0.68831 val_loss= 0.67059 val_acc= 0.68852 time= 0.01567
Epoch: 0029 train_loss= 0.67294 train_acc= 0.70000 val_loss= 0.66886 val_acc= 0.67213 time= 0.01563
Epoch: 0030 train_loss= 0.67441 train_acc= 0.66883 val_loss= 0.66668 val_acc= 0.70492 time= 0.00000
Epoch: 0031 train_loss= 0.67354 train_acc= 0.71039 val_loss= 0.66314 val_acc= 0.67213 time= 0.01563
Epoch: 0032 train_loss= 0.67128 train_acc= 0.62468 val_loss= 0.66108 val_acc= 0.63934 time= 0.01563
Epoch: 0033 train_loss= 0.67022 train_acc= 0.62857 val_loss= 0.65991 val_acc= 0.63934 time= 0.00000
Epoch: 0034 train_loss= 0.66967 train_acc= 0.62468 val_loss= 0.65971 val_acc= 0.67213 time= 0.01563
Epoch: 0035 train_loss= 0.66563 train_acc= 0.64416 val_loss= 0.66068 val_acc= 0.70492 time= 0.01563
Epoch: 0036 train_loss= 0.66615 train_acc= 0.66104 val_loss= 0.66224 val_acc= 0.68852 time= 0.00000
Epoch: 0037 train_loss= 0.66597 train_acc= 0.68831 val_loss= 0.66437 val_acc= 0.70492 time= 0.01563
Epoch: 0038 train_loss= 0.66620 train_acc= 0.68442 val_loss= 0.66596 val_acc= 0.75410 time= 0.01563
Early stopping...
Optimization Finished!
Test set results: cost= 0.68893 accuracy= 0.64754 time= 0.00000
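The log above is consistent with a training loop that prints per-epoch metrics and halts via early stopping once validation loss stops improving. A minimal sketch of such a loop, assuming simple patience-based early stopping on `val_loss` (the actual script may use a different criterion, e.g. comparing against a moving window of recent validation losses; `step_fn`, `patience`, and `max_epochs` are illustrative names, not the author's):

```python
import time


def train_with_early_stopping(step_fn, max_epochs=200, patience=10):
    """Run a training loop with patience-based early stopping.

    step_fn(epoch) performs one epoch of training plus validation and
    returns (train_loss, train_acc, val_loss, val_acc). Training stops
    when val_loss has not improved for `patience` consecutive epochs.
    """
    best_val = float("inf")
    bad_epochs = 0
    history = []
    for epoch in range(1, max_epochs + 1):
        t0 = time.time()
        train_loss, train_acc, val_loss, val_acc = step_fn(epoch)
        history.append((train_loss, train_acc, val_loss, val_acc))
        # Mirror the log format seen above.
        print(f"Epoch: {epoch:04d} train_loss= {train_loss:.5f} "
              f"train_acc= {train_acc:.5f} val_loss= {val_loss:.5f} "
              f"val_acc= {val_acc:.5f} time= {time.time() - t0:.5f}")
        if val_loss < best_val:
            best_val = val_loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                print("Early stopping...")
                break
    print("Optimization Finished!")
    return history
```

Note that with this scheme the run can stop while validation accuracy is still climbing (as at epoch 38 above, where val_acc peaks at 0.75410), which is why many implementations checkpoint the best-validation model and evaluate that on the test set rather than the final weights.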