Epoch: 0001 train_loss= 0.70113 train_acc= 0.50545 val_loss= 0.69814 val_acc= 0.49180 time= 0.24509
Epoch: 0002 train_loss= 0.69816 train_acc= 0.51818 val_loss= 0.69602 val_acc= 0.49180 time= 0.00000
Epoch: 0003 train_loss= 0.69610 train_acc= 0.50364 val_loss= 0.69458 val_acc= 0.50820 time= 0.01563
Epoch: 0004 train_loss= 0.69456 train_acc= 0.55455 val_loss= 0.69370 val_acc= 0.50820 time= 0.01563
Epoch: 0005 train_loss= 0.69369 train_acc= 0.53818 val_loss= 0.69324 val_acc= 0.52459 time= 0.00000
Epoch: 0006 train_loss= 0.69309 train_acc= 0.54000 val_loss= 0.69305 val_acc= 0.52459 time= 0.01563
Epoch: 0007 train_loss= 0.69286 train_acc= 0.56727 val_loss= 0.69305 val_acc= 0.54098 time= 0.01563
Epoch: 0008 train_loss= 0.69263 train_acc= 0.65273 val_loss= 0.69312 val_acc= 0.54098 time= 0.00000
Epoch: 0009 train_loss= 0.69281 train_acc= 0.55455 val_loss= 0.69316 val_acc= 0.55738 time= 0.01563
Epoch: 0010 train_loss= 0.69280 train_acc= 0.60727 val_loss= 0.69318 val_acc= 0.57377 time= 0.01563
Epoch: 0011 train_loss= 0.69278 train_acc= 0.63636 val_loss= 0.69316 val_acc= 0.60656 time= 0.00000
Epoch: 0012 train_loss= 0.69246 train_acc= 0.58000 val_loss= 0.69299 val_acc= 0.62295 time= 0.01563
Epoch: 0013 train_loss= 0.69234 train_acc= 0.64909 val_loss= 0.69281 val_acc= 0.59016 time= 0.01563
Epoch: 0014 train_loss= 0.69237 train_acc= 0.63273 val_loss= 0.69262 val_acc= 0.60656 time= 0.00000
Epoch: 0015 train_loss= 0.69207 train_acc= 0.65455 val_loss= 0.69241 val_acc= 0.60656 time= 0.01563
Epoch: 0016 train_loss= 0.69134 train_acc= 0.64364 val_loss= 0.69221 val_acc= 0.60656 time= 0.01563
Epoch: 0017 train_loss= 0.69117 train_acc= 0.65273 val_loss= 0.69216 val_acc= 0.59016 time= 0.01563
Epoch: 0018 train_loss= 0.69117 train_acc= 0.65818 val_loss= 0.69217 val_acc= 0.55738 time= 0.00000
Epoch: 0019 train_loss= 0.69131 train_acc= 0.59818 val_loss= 0.69200 val_acc= 0.55738 time= 0.01563
Epoch: 0020 train_loss= 0.69099 train_acc= 0.58364 val_loss= 0.69165 val_acc= 0.62295 time= 0.01563
Epoch: 0021 train_loss= 0.69043 train_acc= 0.65273 val_loss= 0.69128 val_acc= 0.59016 time= 0.01291
Epoch: 0022 train_loss= 0.69081 train_acc= 0.62364 val_loss= 0.69102 val_acc= 0.55738 time= 0.00805
Epoch: 0023 train_loss= 0.69088 train_acc= 0.61455 val_loss= 0.69092 val_acc= 0.57377 time= 0.00000
Epoch: 0024 train_loss= 0.69050 train_acc= 0.64909 val_loss= 0.69081 val_acc= 0.57377 time= 0.01563
Epoch: 0025 train_loss= 0.69041 train_acc= 0.58909 val_loss= 0.69048 val_acc= 0.60656 time= 0.01563
Epoch: 0026 train_loss= 0.68983 train_acc= 0.59818 val_loss= 0.69039 val_acc= 0.59016 time= 0.00000
Epoch: 0027 train_loss= 0.68959 train_acc= 0.68727 val_loss= 0.69037 val_acc= 0.57377 time= 0.01563
Epoch: 0028 train_loss= 0.68949 train_acc= 0.62182 val_loss= 0.69046 val_acc= 0.60656 time= 0.01563
Epoch: 0029 train_loss= 0.68922 train_acc= 0.62727 val_loss= 0.69073 val_acc= 0.57377 time= 0.00000
Epoch: 0030 train_loss= 0.68901 train_acc= 0.64000 val_loss= 0.69091 val_acc= 0.54098 time= 0.01563
Early stopping...
Optimization Finished!
Test set results: cost= 0.69650 accuracy= 0.44262 time= 0.00000 
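The run above halts at epoch 30 with "Early stopping...": validation loss bottoms out around epochs 26–28 (0.69037) and then rises for the last few epochs while training loss keeps falling, a classic overfitting signal. A common rule that reproduces this behavior is to stop once the newest validation loss exceeds the mean of the previous `window` validation losses. The sketch below is an assumption about the mechanism, not the script that produced this log; the `window=10` value and both function names are hypothetical.

```python
import statistics

def should_stop(cost_val, window=10):
    """Early-stopping check: return True when the newest validation loss
    is above the mean of the preceding `window` losses.
    (The window size is an assumption, not read from the log.)"""
    if len(cost_val) <= window:
        return False
    return cost_val[-1] > statistics.mean(cost_val[-(window + 1):-1])

def log_epoch(epoch, train_loss, train_acc, val_loss, val_acc, elapsed):
    """Format one epoch line in the same style as the log above."""
    return ("Epoch: %04d train_loss= %.5f train_acc= %.5f "
            "val_loss= %.5f val_acc= %.5f time= %.5f"
            % (epoch, train_loss, train_acc, val_loss, val_acc, elapsed))
```

With the validation losses from this log, the rule stays quiet through epoch 29 and fires after epoch 30, matching where the run actually stopped. Note also that test accuracy (0.44262) lands well below the best validation accuracy (0.62295), so the validation split here is an optimistic estimate of generalization.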
