Epoch: 0001 train_loss= 2.09590 train_acc= 0.06739 val_loss= 2.08154 val_acc= 0.10345 time= 0.26564
Epoch: 0002 train_loss= 2.08773 train_acc= 0.11590 val_loss= 2.07826 val_acc= 0.17241 time= 0.01563
Epoch: 0003 train_loss= 2.09109 train_acc= 0.11860 val_loss= 2.07605 val_acc= 0.20690 time= 0.01563
Epoch: 0004 train_loss= 2.08148 train_acc= 0.09704 val_loss= 2.07402 val_acc= 0.20690 time= 0.00000
Epoch: 0005 train_loss= 2.07686 train_acc= 0.16173 val_loss= 2.07203 val_acc= 0.20690 time= 0.01563
Epoch: 0006 train_loss= 2.07026 train_acc= 0.17251 val_loss= 2.07002 val_acc= 0.20690 time= 0.01563
Epoch: 0007 train_loss= 2.05780 train_acc= 0.17251 val_loss= 2.06782 val_acc= 0.20690 time= 0.00000
Epoch: 0008 train_loss= 2.06062 train_acc= 0.17790 val_loss= 2.06557 val_acc= 0.20690 time= 0.01563
Epoch: 0009 train_loss= 2.05907 train_acc= 0.17251 val_loss= 2.06313 val_acc= 0.20690 time= 0.01563
Epoch: 0010 train_loss= 2.05904 train_acc= 0.17251 val_loss= 2.06058 val_acc= 0.20690 time= 0.00000
Epoch: 0011 train_loss= 2.05785 train_acc= 0.17251 val_loss= 2.05817 val_acc= 0.20690 time= 0.01563
Epoch: 0012 train_loss= 2.05490 train_acc= 0.17251 val_loss= 2.05527 val_acc= 0.20690 time= 0.01563
Epoch: 0013 train_loss= 2.05299 train_acc= 0.17251 val_loss= 2.05215 val_acc= 0.20690 time= 0.00000
Epoch: 0014 train_loss= 2.05141 train_acc= 0.17520 val_loss= 2.04916 val_acc= 0.20690 time= 0.01563
Epoch: 0015 train_loss= 2.05226 train_acc= 0.17790 val_loss= 2.04670 val_acc= 0.20690 time= 0.01563
Epoch: 0016 train_loss= 2.05589 train_acc= 0.18598 val_loss= 2.04436 val_acc= 0.20690 time= 0.00000
Epoch: 0017 train_loss= 2.05130 train_acc= 0.17251 val_loss= 2.04200 val_acc= 0.20690 time= 0.01563
Epoch: 0018 train_loss= 2.04610 train_acc= 0.17251 val_loss= 2.03964 val_acc= 0.20690 time= 0.01563
Epoch: 0019 train_loss= 2.04565 train_acc= 0.16981 val_loss= 2.03727 val_acc= 0.20690 time= 0.00000
Epoch: 0020 train_loss= 2.05169 train_acc= 0.17251 val_loss= 2.03519 val_acc= 0.20690 time= 0.01563
Epoch: 0021 train_loss= 2.04991 train_acc= 0.17520 val_loss= 2.03304 val_acc= 0.20690 time= 0.01563
Epoch: 0022 train_loss= 2.04590 train_acc= 0.17790 val_loss= 2.03160 val_acc= 0.20690 time= 0.00000
Epoch: 0023 train_loss= 2.04433 train_acc= 0.17251 val_loss= 2.03058 val_acc= 0.20690 time= 0.01563
Epoch: 0024 train_loss= 2.04480 train_acc= 0.17251 val_loss= 2.02955 val_acc= 0.20690 time= 0.01563
Epoch: 0025 train_loss= 2.04661 train_acc= 0.17251 val_loss= 2.02926 val_acc= 0.20690 time= 0.00000
Epoch: 0026 train_loss= 2.04281 train_acc= 0.17520 val_loss= 2.02904 val_acc= 0.20690 time= 0.01563
Epoch: 0027 train_loss= 2.05103 train_acc= 0.16173 val_loss= 2.02960 val_acc= 0.20690 time= 0.01563
Epoch: 0028 train_loss= 2.04900 train_acc= 0.18059 val_loss= 2.03009 val_acc= 0.20690 time= 0.01563
Epoch: 0029 train_loss= 2.04009 train_acc= 0.19407 val_loss= 2.03079 val_acc= 0.20690 time= 0.00000
Epoch: 0030 train_loss= 2.04188 train_acc= 0.18329 val_loss= 2.03164 val_acc= 0.20690 time= 0.01563
Early stopping...
Optimization Finished!
Test set results: cost= 2.09435 accuracy= 0.18644 time= 0.00000 
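The log above is the output of a training loop with early stopping: validation loss falls until around epoch 26, then rises, and training halts at epoch 30 before evaluating on the test set. The original script is not shown; below is a hypothetical minimal sketch of a loop that produces lines in this format. The stopping rule (halt once validation loss exceeds the mean of the previous `window` epochs) and the synthetic placeholder metrics are assumptions, not the actual model code.

```python
import time

def train_with_early_stopping(num_epochs=200, window=10):
    """Toy loop that logs in the same format as the run above.

    The metric values here are synthetic placeholders standing in for a real
    train/eval step; only the logging format and the early-stopping rule
    (val_loss above the mean of the last `window` epochs) are illustrated.
    """
    val_losses = []
    epoch = 0
    for epoch in range(num_epochs):
        t = time.time()
        # Placeholder "metrics": replace with a real forward/backward pass.
        train_loss = 2.10 - 0.002 * epoch
        train_acc = min(0.17, 0.07 + 0.01 * epoch)
        # Synthetic val_loss: decreases, then starts rising after epoch 25.
        val_loss = 2.08 - 0.003 * min(epoch, 25) + 0.001 * max(0, epoch - 25)
        val_acc = 0.21
        val_losses.append(val_loss)
        print("Epoch: {:04d}".format(epoch + 1),
              "train_loss= {:.5f}".format(train_loss),
              "train_acc= {:.5f}".format(train_acc),
              "val_loss= {:.5f}".format(val_loss),
              "val_acc= {:.5f}".format(val_acc),
              "time= {:.5f}".format(time.time() - t))
        # Early stopping: current val_loss above the mean of the last
        # `window` validation losses means the model has begun to overfit.
        if (epoch > window and
                val_losses[-1] > sum(val_losses[-(window + 1):-1]) / window):
            print("Early stopping...")
            break
    print("Optimization Finished!")
    return epoch + 1  # number of epochs actually run
```

With the synthetic metrics above, the loop stops well before `num_epochs`, mirroring the 30-epoch run in the log; swapping in a real model's train and validation steps leaves the logging and stopping logic unchanged.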
