Abstract: For the optimization problems arising in deep learning, it is important to design an optimization method that improves the convergence rate without sacrificing generalization ability. This paper proposes a layer-wise AdaBelief optimization algorithm to solve deep learning optimization problems more efficiently. In the proposed algorithm, each layer of the deep neural network is assigned an appropriate learning rate in order to achieve a faster convergence rate. We also give theorems that guarantee the convergence of the layer-wise AdaBelief method. Finally, we evaluate the effectiveness and efficiency of the proposed algorithm on experimental examples. Experimental results show that the layer-wise AdaBelief algorithm converges faster than mainstream algorithms while maintaining excellent convergence results in all numerical examples.
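To make the layer-wise idea concrete, below is a minimal sketch of an AdaBelief-style update in which each layer receives its own learning rate. The function name `layerwise_adabelief_step`, the per-layer rate list `lrs`, and the hyperparameter defaults are illustrative assumptions; the abstract does not specify how the per-layer rates are chosen.

```python
import numpy as np

def layerwise_adabelief_step(params, grads, ms, ss, lrs, t,
                             beta1=0.9, beta2=0.999, eps=1e-8):
    """One optimizer step: standard AdaBelief moment estimates, but each
    layer i is updated with its own (hypothetical) learning rate lrs[i]."""
    for i, (p, g) in enumerate(zip(params, grads)):
        # First moment: exponential moving average of the gradient.
        ms[i] = beta1 * ms[i] + (1 - beta1) * g
        # Second moment ("belief"): EMA of the squared deviation of the
        # gradient from its predicted value, as in AdaBelief.
        ss[i] = beta2 * ss[i] + (1 - beta2) * (g - ms[i]) ** 2 + eps
        # Bias-corrected estimates.
        m_hat = ms[i] / (1 - beta1 ** t)
        s_hat = ss[i] / (1 - beta2 ** t)
        # Layer-wise step: lrs[i] replaces the single global learning rate.
        p -= lrs[i] * m_hat / (np.sqrt(s_hat) + eps)
    return params, ms, ss
```

In this sketch `params`, `grads`, `ms`, and `ss` are lists of NumPy arrays, one per layer, and `t` is the 1-based step counter used for bias correction; any rule for assigning the values in `lrs` (e.g., depth-dependent scaling) could be plugged in without changing the update itself.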