Applying Ensembles of Multilinear Classifiers in the Frequency Domain

CVPR 2006 (modified: 11 Nov 2022)
Abstract: Ensemble methods such as bootstrapping, bagging, or boosting have had a considerable impact on recent developments in machine learning, pattern recognition, and computer vision. Theoretical and practical results alike have established that, in terms of accuracy, ensembles of weak classifiers generally outperform monolithic solutions. However, this comes at the cost of an extensive training process. The work presented in this paper results from projects on advanced human-machine interaction. In scenarios like ours, online learning is a major requirement, and lengthy training is prohibitive. We therefore propose a different approach to ensemble learning. Instead of a set of weak classifiers, we combine strong, separable, multilinear discriminant functions. These are especially suited for computer vision: they train very quickly and allow for rapid classification of image content. Training different classifiers for different contexts or on semantically organized data provides ensembles of experts. We collapse a set of experts into a single multilinear function and thus achieve the same runtime for arbitrarily many classifiers as for a single one. Moreover, carrying out the classification in the frequency domain results in faster frame rates. Experiments with image sequences recorded in typical home environments show that our ensemble training schemes yield high accuracy on unconstrained and cluttered data.
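The following Python/NumPy sketch is only an illustration of the general idea described in the abstract, not the authors' implementation. It assumes each expert is a separable (rank-1) linear filter, that ensemble scores are combined by summation so the filters can be collapsed offline into a single filter, and that dense evaluation over a frame is done in the frequency domain via the convolution theorem; all names are hypothetical.

import numpy as np

def separable_filter(u, v):
    """Rank-1 (separable) 2-D filter built from two 1-D profiles: w = u v^T."""
    return np.outer(u, v)

def collapse_experts(filters):
    """Collapse an ensemble of linear experts into one filter.
    If ensemble scores are combined by summation, the filters can be
    summed once, offline, so runtime no longer depends on how many
    experts the ensemble contains."""
    return np.sum(filters, axis=0)

def correlate_fft(frame, w):
    """Score every image location with the (collapsed) classifier via the
    convolution theorem: circular cross-correlation in the spatial domain
    becomes a pointwise product with the conjugate spectrum."""
    H, W = frame.shape
    F = np.fft.rfft2(frame)
    Wf = np.fft.rfft2(w, s=(H, W))          # zero-pad the filter to frame size
    return np.fft.irfft2(F * np.conj(Wf), s=(H, W))

# Usage: three hypothetical rank-1 experts applied to one frame.
rng = np.random.default_rng(0)
frame = rng.standard_normal((240, 320))
experts = np.stack([separable_filter(rng.standard_normal(16),
                                     rng.standard_normal(16))
                    for _ in range(3)])
scores = correlate_fft(frame, collapse_experts(experts))
print(scores.shape)  # (240, 320): one score map, independent of ensemble size

Under these assumptions the per-frame cost is one FFT of the frame, one pointwise multiplication, and one inverse FFT, which is why the runtime stays the same for arbitrarily many collapsed classifiers as for a single one.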