Flat random forest: a new ensemble learning method towards better training efficiency and adaptive model size to deep forest

Published: 14 May 2020, Last Modified: 11 Apr 2025 · International Journal of Machine Learning and Cybernetics · CC BY 4.0
Abstract: The known deficiencies of deep neural networks include inferior training efficiency, weak parallelization capability, too many hyper-parameters, etc. To address these issues, some researchers presented deep forest, a special deep learning model, which achieves significant improvements but still suffers from poor training efficiency, inflexible model size and weak interpretability. This paper endeavors to solve these issues in a new way. Firstly, deep forest is extended to the densely connected deep forest to enhance prediction accuracy. Secondly, to enable parallel training with an adaptive model size, the flat random forest is proposed, which balances the width and depth of the densely connected deep forest. Finally, two core algorithms are presented, one for forward output-weight computation and one for output-weight updating. Experimental results show that, compared with deep forest, the proposed flat random forest achieves competitive prediction accuracy, higher training efficiency, fewer hyper-parameters and an adaptive model size.
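The abstract does not spell out the two core algorithms, so the following is only a rough, illustrative sketch of the general idea it describes: several random forests trained independently (hence "flat" and parallelizable) whose outputs are combined through output weights solved in closed form. The class name `FlatRandomForestSketch` and the parameters `n_forests` and `lam` are my own assumptions, not the paper's.

```python
# Illustrative sketch only -- NOT the paper's exact algorithms.
# Several forests are fit independently (parallel-friendly), then a set of
# output weights over their predicted probabilities is solved by ridge
# least squares, loosely mirroring "forward output weights computation".
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


class FlatRandomForestSketch:
    def __init__(self, n_forests=4, n_estimators=50, lam=1e-2, random_state=0):
        self.forests = [
            RandomForestClassifier(n_estimators=n_estimators,
                                   random_state=random_state + i)
            for i in range(n_forests)
        ]
        self.lam = lam   # ridge regularization for the output-weight solve
        self.beta = None

    def _features(self, X):
        # Stack each forest's class-probability outputs as one feature block.
        return np.hstack([f.predict_proba(X) for f in self.forests])

    def fit(self, X, y):
        for f in self.forests:
            f.fit(X, y)  # forests are independent, so this loop parallelizes
        H = self._features(X)
        T = np.eye(len(np.unique(y)))[y]  # one-hot targets
        # Closed-form ridge solution: beta = (H^T H + lam*I)^{-1} H^T T
        self.beta = np.linalg.solve(H.T @ H + self.lam * np.eye(H.shape[1]),
                                    H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(self._features(X) @ self.beta, axis=1)


X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
model = FlatRandomForestSketch().fit(Xtr, ytr)
print("test accuracy:", (model.predict(Xte) == yte).mean())
```

Under this reading, widening the ensemble (more forests) rather than deepening a cascade is what keeps training parallel and lets model size adapt; the closed-form weight solve avoids iterative retraining of the combiner.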