Abstract: Traditional deep neural networks (DNNs) often suffer from a time-consuming training process caused by the accumulation of many network layers and a large number of parameters, since more neural units must be stacked to achieve desirable performance. In particular, when dealing with large-scale datasets, a single DNN can hardly obtain the best performance under limited computing resources. To address these issues, this paper proposes a novel Parallel Hierarchical Ensemble Deep Neural Network (PH-E-DNN) to improve both the accuracy and the efficiency of the deep network. Firstly, the fuzzy C-means (FCM) algorithm is adopted to separate the large-scale dataset into several small data partitions. Owing to this fuzzy partitioning, several sub-models can be trained independently, each learning from its own data partition in isolation from the others. Secondly, the predictions of each sub-model in the current level are appended to the original regional subsets as discriminative knowledge, so that predictions from each level symmetrically augment the inputs of the next level, creating a symmetrical flow of discriminative knowledge across the hierarchical structure. Finally, the multiple regional subsets are merged to form a globally augmented dataset, while the multi-level parallel sub-models are stacked to organize a large-scale deep ensemble network; only the multiple DNNs in the last level are ensembled to generate the final decision of the PH-E-DNN. Extensive experiments demonstrate that the PH-E-DNN outperforms several traditional and deep learning models while requiring only a few parameters to be set, which highlights its efficiency and flexibility.
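The pipeline the abstract describes can be sketched as a toy end-to-end example: fuzzy C-means splits the data into regional subsets, a sub-model is trained per partition at each level, each level's predictions are appended to the features of the next level, and only the last level's sub-models are ensembled. This is a minimal illustrative sketch, not the authors' implementation: `fcm`, `NearestCentroid`, and `ph_e_dnn` are hypothetical names, and a nearest-centroid classifier stands in for the paper's DNN sub-models.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means: returns membership matrix U (n x c) and centroids."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))          # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

class NearestCentroid:
    """Toy stand-in for a DNN sub-model; predicts soft class probabilities."""
    def __init__(self, classes):
        self.classes_ = classes
    def fit(self, X, y):
        # one centroid per class present in this regional subset
        self.cent_ = {k: X[y == k].mean(axis=0) for k in self.classes_ if np.any(y == k)}
        return self
    def predict_proba(self, X):
        P = np.zeros((len(X), len(self.classes_)))
        for i, k in enumerate(self.classes_):
            if k in self.cent_:
                P[:, i] = np.exp(-np.linalg.norm(X - self.cent_[k], axis=1))
        s = P.sum(axis=1, keepdims=True)
        s[s == 0] = 1.0
        return P / s

def ph_e_dnn(X, y, c=3, levels=2):
    classes = np.unique(y)
    U, _ = fcm(X, c)
    part = U.argmax(axis=1)                        # harden memberships into regional subsets
    feats = X
    for lvl in range(levels):
        models = []
        for j in range(c):
            idx = part == j
            if not idx.any():                      # guard: empty partition falls back to all data
                idx = np.ones(len(X), dtype=bool)
            models.append(NearestCentroid(classes).fit(feats[idx], y[idx]))
        preds = np.concatenate([mo.predict_proba(feats) for mo in models], axis=1)
        if lvl < levels - 1:
            # append this level's predictions as discriminative knowledge for the next level
            feats = np.concatenate([feats, preds], axis=1)
    # ensemble only the last level: average class probabilities across sub-models
    avg = preds.reshape(len(X), c, len(classes)).mean(axis=1)
    return classes[avg.argmax(axis=1)]

# toy demo on synthetic data: two blobs, classes interleaved within each blob
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
y = np.array([0, 1] * 30)
preds = ph_e_dnn(X, y, c=2, levels=2)
```

The two-level loop mirrors the described hierarchy; a faithful reproduction would replace `NearestCentroid` with per-partition DNNs trained in parallel.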
External IDs: dblp:journals/symmetry/JiangDLZZ25