Abstract: Ensemble methods aggregate the predictions of multiple models through some form of weighted voting. In this work, we study how the assignment of voting power to each individual model affects the performance of ensemble methods. We empirically and comparatively evaluate the accuracy and running time of different power-voting ensemble methods using standard classifiers and mainstream classification benchmarks. The results show that power-based ensemble voting outperforms the equal-power baseline, that unsupervised learning of the voting power can be competitive with supervised learning, and that, among supervised approaches, learning voting power through Shapley values and regression outperforms simply using accuracy.
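The weighted voting at the core of these methods can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `weighted_vote` function and the example weights (here proportional to hypothetical validation accuracies, one of the supervised options the abstract mentions) are assumptions for demonstration.

```python
import numpy as np

def weighted_vote(predictions, weights):
    """Aggregate class predictions from several models by weighted voting.

    predictions: shape (n_models, n_samples), integer class labels.
    weights: shape (n_models,), the voting power of each model.
    Returns shape (n_samples,): the class with the highest total weight.
    """
    predictions = np.asarray(predictions)
    weights = np.asarray(weights, dtype=float)
    n_classes = predictions.max() + 1
    n_samples = predictions.shape[1]
    scores = np.zeros((n_samples, n_classes))
    for model_preds, w in zip(predictions, weights):
        # Each model adds its voting power to the class it predicts.
        scores[np.arange(n_samples), model_preds] += w
    return scores.argmax(axis=1)

# Three models vote on four samples; weights are illustrative.
preds = [[0, 1, 1, 0],
         [0, 0, 1, 1],
         [1, 1, 1, 0]]
weights = [0.5, 0.3, 0.2]
print(weighted_vote(preds, weights))  # → [0 1 1 0]
```

With equal weights, every model would count the same (the equal-power baseline); the comparison in the paper concerns how much is gained by learning unequal weights instead.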
External IDs: doi:10.1007/978-3-032-02049-9_28