Epoch: 0001 train_loss= 1.38477 train_acc= 0.50000 val_loss= 1.12675 val_acc= 0.44262 time= 0.06250
Epoch: 0002 train_loss= 0.89503 train_acc= 0.51212 val_loss= 1.14728 val_acc= 0.44262 time= 0.01563
Epoch: 0003 train_loss= 1.17535 train_acc= 0.51212 val_loss= 1.18191 val_acc= 0.44262 time= 0.01562
Epoch: 0004 train_loss= 0.95855 train_acc= 0.52424 val_loss= 1.26253 val_acc= 0.44262 time= 0.01563
Epoch: 0005 train_loss= 0.85340 train_acc= 0.49394 val_loss= 1.33035 val_acc= 0.37705 time= 0.01563
Epoch: 0006 train_loss= 0.80932 train_acc= 0.51818 val_loss= 1.38847 val_acc= 0.40984 time= 0.00000
Epoch: 0007 train_loss= 0.82885 train_acc= 0.50606 val_loss= 1.43453 val_acc= 0.42623 time= 0.01563
Epoch: 0008 train_loss= 0.88043 train_acc= 0.54545 val_loss= 1.39648 val_acc= 0.47541 time= 0.01563
Epoch: 0009 train_loss= 0.76241 train_acc= 0.50909 val_loss= 1.34667 val_acc= 0.50820 time= 0.01563
Epoch: 0010 train_loss= 0.82851 train_acc= 0.50303 val_loss= 1.27247 val_acc= 0.50820 time= 0.00000
Epoch: 0011 train_loss= 0.83775 train_acc= 0.52121 val_loss= 1.18725 val_acc= 0.50820 time= 0.01563
Epoch: 0012 train_loss= 0.77666 train_acc= 0.54242 val_loss= 1.10108 val_acc= 0.50820 time= 0.02427
Epoch: 0013 train_loss= 0.79198 train_acc= 0.48485 val_loss= 1.04071 val_acc= 0.50820 time= 0.01300
Epoch: 0014 train_loss= 0.73918 train_acc= 0.47273 val_loss= 0.98443 val_acc= 0.50820 time= 0.00000
Epoch: 0015 train_loss= 0.83189 train_acc= 0.48182 val_loss= 0.93554 val_acc= 0.50820 time= 0.01568
Epoch: 0016 train_loss= 0.77713 train_acc= 0.49091 val_loss= 0.88899 val_acc= 0.50820 time= 0.01563
Epoch: 0017 train_loss= 0.74797 train_acc= 0.49697 val_loss= 0.85211 val_acc= 0.45902 time= 0.01563
Epoch: 0018 train_loss= 0.76774 train_acc= 0.53030 val_loss= 0.81794 val_acc= 0.42623 time= 0.02118
Epoch: 0019 train_loss= 0.72496 train_acc= 0.50909 val_loss= 0.79518 val_acc= 0.44262 time= 0.01105
Epoch: 0020 train_loss= 0.71604 train_acc= 0.50909 val_loss= 0.77697 val_acc= 0.39344 time= 0.00000
Epoch: 0021 train_loss= 0.71073 train_acc= 0.53030 val_loss= 0.76255 val_acc= 0.49180 time= 0.01563
Epoch: 0022 train_loss= 0.71432 train_acc= 0.52424 val_loss= 0.75227 val_acc= 0.39344 time= 0.01563
Epoch: 0023 train_loss= 0.73471 train_acc= 0.53030 val_loss= 0.74513 val_acc= 0.37705 time= 0.01562
Epoch: 0024 train_loss= 0.79039 train_acc= 0.50000 val_loss= 0.73879 val_acc= 0.39344 time= 0.01563
Epoch: 0025 train_loss= 0.70405 train_acc= 0.54242 val_loss= 0.73401 val_acc= 0.39344 time= 0.01563
Epoch: 0026 train_loss= 0.85488 train_acc= 0.53333 val_loss= 0.72992 val_acc= 0.37705 time= 0.01563
Epoch: 0027 train_loss= 0.72474 train_acc= 0.47273 val_loss= 0.72737 val_acc= 0.39344 time= 0.01563
Epoch: 0028 train_loss= 0.72105 train_acc= 0.53030 val_loss= 0.72519 val_acc= 0.40984 time= 0.00000
Epoch: 0029 train_loss= 0.73224 train_acc= 0.53333 val_loss= 0.72357 val_acc= 0.40984 time= 0.01562
Epoch: 0030 train_loss= 0.72198 train_acc= 0.57576 val_loss= 0.72183 val_acc= 0.37705 time= 0.01563
Epoch: 0031 train_loss= 0.73274 train_acc= 0.48788 val_loss= 0.72071 val_acc= 0.42623 time= 0.01562
Epoch: 0032 train_loss= 0.76238 train_acc= 0.50606 val_loss= 0.72055 val_acc= 0.45902 time= 0.01563
Epoch: 0033 train_loss= 0.70956 train_acc= 0.52121 val_loss= 0.72055 val_acc= 0.44262 time= 0.01563
Epoch: 0034 train_loss= 0.71065 train_acc= 0.51515 val_loss= 0.72089 val_acc= 0.45902 time= 0.00000
Epoch: 0035 train_loss= 0.72623 train_acc= 0.49394 val_loss= 0.72101 val_acc= 0.45902 time= 0.01563
Epoch: 0036 train_loss= 0.73030 train_acc= 0.49697 val_loss= 0.72144 val_acc= 0.47541 time= 0.01563
Epoch: 0037 train_loss= 0.70987 train_acc= 0.48788 val_loss= 0.72211 val_acc= 0.45902 time= 0.01563
Epoch: 0038 train_loss= 0.73344 train_acc= 0.51818 val_loss= 0.72207 val_acc= 0.45902 time= 0.01563
Early stopping...
Optimization Finished!
Test set results: cost= 0.72979 accuracy= 0.48361 time= 0.00000 
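The run above halts at epoch 38 once validation loss plateaus (it bottoms out around 0.7205 near epoch 32–33 and then creeps upward). The log does not show the exact stopping criterion the script uses, so the following is only a minimal sketch of one common variant, patience-based early stopping on validation loss; the `EarlyStopping` class, the `patience` value, and the sample loss curve are illustrative assumptions, not the script's actual code.

```python
class EarlyStopping:
    """Signal a stop when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.bad_epochs = 0        # consecutive epochs without improvement

    def step(self, val_loss):
        """Record this epoch's validation loss; return True if training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Example: a loss curve that improves, then plateaus, like the log above.
stopper = EarlyStopping(patience=5)
losses = [1.13, 1.15, 1.18, 0.98, 0.88, 0.73, 0.721,
          0.722, 0.722, 0.723, 0.721, 0.722]
for epoch, loss in enumerate(losses, start=1):
    if stopper.step(loss):
        print(f"Early stopping at epoch {epoch}")
        break
```

With strict-improvement tracking, the ties at 0.721–0.723 count as non-improving epochs, so the loop stops once five of them accumulate; a variant with a small improvement threshold (`min_delta`) would be less sensitive to such noise.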
