Abstract: Ensemble learning has emerged as a pivotal area of interest within the machine learning community, consistently delivering superior performance across various predictive tasks. Diversity among ensemble members has long been recognized as a critical factor behind the exceptional performance of these models. Efforts to enhance ensemble performance have predominantly focused on increasing diversity at the data level, which, for neural networks, requires large amounts of training data to avoid overfitting. This paper proposes a novel diversity-enhancing strategy for neural network ensembles, Parameter Diversification (PaD). In particular, we introduce a regularization term, weighted by a controlling parameter, into the training loss function of each constituent network. This enables us to cultivate a higher degree of diversity within the ensemble while maintaining the accuracy of the individual models. Our key insight is straightforward: we encourage ensemble diversity by inducing the other models to deviate from the optimal model, i.e., we want the outputs of the constituent networks to differ from one another. We validate our approach on multiple machine learning datasets and simulated datasets. The experimental results indicate that the proposed approach effectively creates favorable diversity within the ensemble, endowing it with promising generalization capabilities.
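To make the idea concrete, below is a minimal sketch (not the authors' implementation) of such a diversity-regularized member loss in PyTorch. The names `member_logits`, `reference_logits`, and `lam` (standing in for the controlling parameter), as well as the choice of KL divergence as the deviation measure, are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def member_loss(member_logits, reference_logits, targets, lam=0.1):
    """Task loss minus a diversity reward, as a sketch of the idea above.

    `lam` plays the role of the controlling parameter: larger values push
    the member's predictive distribution further away from the reference
    ("optimal") model's, trading individual accuracy for ensemble diversity.
    """
    # Standard task loss keeps the individual member accurate.
    task = F.cross_entropy(member_logits, targets)

    # KL divergence between the member's and the reference model's
    # predictive distributions; the reference is detached because only
    # the member network is being trained.
    diversity = F.kl_div(
        F.log_softmax(member_logits, dim=-1),
        F.softmax(reference_logits.detach(), dim=-1),
        reduction="batchmean",
    )

    # Subtracting the divergence rewards outputs that deviate from the
    # reference model, encouraging diversity across ensemble members.
    return task - lam * diversity

# Illustrative usage with random data (10-class problem, batch of 8).
member_logits = torch.randn(8, 10, requires_grad=True)
reference_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = member_loss(member_logits, reference_logits, targets, lam=0.1)
loss.backward()
```

Under this reading, setting `lam` too large would let the diversity reward dominate and degrade each member's accuracy, which is why a controlling parameter is needed to balance the two terms.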