Epoch: 0001 train_loss= 2.08517 train_acc= 0.15472 val_loss= 2.08613 val_acc= 0.03448 time= 0.30064
Epoch: 0002 train_loss= 2.08363 train_acc= 0.14340 val_loss= 2.08392 val_acc= 0.03448 time= 0.00000
Epoch: 0003 train_loss= 2.08188 train_acc= 0.15849 val_loss= 2.08158 val_acc= 0.03448 time= 0.01563
Epoch: 0004 train_loss= 2.08026 train_acc= 0.15849 val_loss= 2.07935 val_acc= 0.03448 time= 0.01563
Epoch: 0005 train_loss= 2.07837 train_acc= 0.16604 val_loss= 2.07745 val_acc= 0.03448 time= 0.00000
Epoch: 0006 train_loss= 2.07775 train_acc= 0.14717 val_loss= 2.07562 val_acc= 0.03448 time= 0.01563
Epoch: 0007 train_loss= 2.07541 train_acc= 0.16226 val_loss= 2.07389 val_acc= 0.24138 time= 0.00000
Epoch: 0008 train_loss= 2.07669 train_acc= 0.18113 val_loss= 2.07225 val_acc= 0.24138 time= 0.01563
Epoch: 0009 train_loss= 2.07260 train_acc= 0.15472 val_loss= 2.07076 val_acc= 0.24138 time= 0.00000
Epoch: 0010 train_loss= 2.07309 train_acc= 0.14717 val_loss= 2.06938 val_acc= 0.24138 time= 0.01563
Epoch: 0011 train_loss= 2.07236 train_acc= 0.18491 val_loss= 2.06811 val_acc= 0.24138 time= 0.00000
Epoch: 0012 train_loss= 2.07290 train_acc= 0.16981 val_loss= 2.06689 val_acc= 0.24138 time= 0.01563
Epoch: 0013 train_loss= 2.07247 train_acc= 0.14717 val_loss= 2.06566 val_acc= 0.24138 time= 0.00000
Epoch: 0014 train_loss= 2.07001 train_acc= 0.16226 val_loss= 2.06446 val_acc= 0.24138 time= 0.01563
Epoch: 0015 train_loss= 2.07019 train_acc= 0.16981 val_loss= 2.06322 val_acc= 0.24138 time= 0.00000
Epoch: 0016 train_loss= 2.07072 train_acc= 0.16226 val_loss= 2.06192 val_acc= 0.24138 time= 0.01563
Epoch: 0017 train_loss= 2.06980 train_acc= 0.15472 val_loss= 2.06077 val_acc= 0.24138 time= 0.01563
Epoch: 0018 train_loss= 2.07020 train_acc= 0.16226 val_loss= 2.05965 val_acc= 0.24138 time= 0.00000
Epoch: 0019 train_loss= 2.07002 train_acc= 0.16226 val_loss= 2.05862 val_acc= 0.24138 time= 0.01563
Epoch: 0020 train_loss= 2.06785 train_acc= 0.15849 val_loss= 2.05760 val_acc= 0.24138 time= 0.00000
Epoch: 0021 train_loss= 2.07029 train_acc= 0.16226 val_loss= 2.05673 val_acc= 0.24138 time= 0.01563
Epoch: 0022 train_loss= 2.06905 train_acc= 0.16226 val_loss= 2.05605 val_acc= 0.24138 time= 0.00000
Epoch: 0023 train_loss= 2.06929 train_acc= 0.16226 val_loss= 2.05549 val_acc= 0.24138 time= 0.01563
Epoch: 0024 train_loss= 2.06784 train_acc= 0.16226 val_loss= 2.05513 val_acc= 0.24138 time= 0.01562
Epoch: 0025 train_loss= 2.06795 train_acc= 0.15849 val_loss= 2.05486 val_acc= 0.24138 time= 0.00000
Epoch: 0026 train_loss= 2.06858 train_acc= 0.16604 val_loss= 2.05470 val_acc= 0.24138 time= 0.01563
Epoch: 0027 train_loss= 2.06773 train_acc= 0.16226 val_loss= 2.05472 val_acc= 0.24138 time= 0.00000
Epoch: 0028 train_loss= 2.06768 train_acc= 0.16226 val_loss= 2.05481 val_acc= 0.24138 time= 0.01563
Epoch: 0029 train_loss= 2.06891 train_acc= 0.15472 val_loss= 2.05490 val_acc= 0.24138 time= 0.01563
Epoch: 0030 train_loss= 2.06834 train_acc= 0.16604 val_loss= 2.05500 val_acc= 0.24138 time= 0.00000
Epoch: 0031 train_loss= 2.06755 train_acc= 0.15849 val_loss= 2.05509 val_acc= 0.24138 time= 0.01563
Epoch: 0032 train_loss= 2.06766 train_acc= 0.16226 val_loss= 2.05527 val_acc= 0.24138 time= 0.00000
Early stopping...
Optimization Finished!
Test set results: cost= 2.06095 accuracy= 0.11864 time= 0.00000 
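The log above shows validation loss bottoming out around epoch 26 and then creeping back up, at which point training halts. A minimal sketch of the early-stopping loop that would emit lines in this format is below; the `patience` value and the loss curve in the example are assumptions for illustration, not taken from the actual run.

```python
import time

def train_with_early_stopping(val_losses, patience=5):
    """Stop once val_loss has not improved for `patience` consecutive epochs.

    `val_losses` stands in for the per-epoch validation loss a real
    training step would compute; it is a placeholder for illustration.
    Returns the number of epochs actually run.
    """
    best = float("inf")
    wait = 0
    epoch = 0
    for epoch, val_loss in enumerate(val_losses, start=1):
        t0 = time.time()
        # ... one optimization step over the training set would run here ...
        print(f"Epoch: {epoch:04d} val_loss= {val_loss:.5f} "
              f"time= {time.time() - t0:.5f}")
        if val_loss < best:
            best, wait = val_loss, 0  # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                print("Early stopping...")
                break
    print("Optimization Finished!")
    return epoch
```

With a loss curve that improves for three epochs and then plateaus, the loop stops `patience` epochs after the last improvement, mirroring the shape of the run logged above.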
