Imprecise Gaussian discriminant classification

Pattern Recognit., 2021 (modified: 14 Jun 2021)
Highlights
• We robustify Gaussian discriminant analysis by considering sets of estimates.
• We use near-ignorance priors to derive bounding boxes on mean estimates.
• We discuss the computational issues for generic and diagonal covariance matrices.
• We present a full experimental study showing the benefits of using imprecise estimates.
• We make a first exploration of the benefits of using the model in non-i.i.d. situations.

Abstract: Gaussian discriminant analysis is a popular classification model that, in its precise form, can produce unreliable predictions under high uncertainty (e.g., due to scarce or noisy data). While imprecise probability theory offers an appealing theoretical framework to address such issues, it has not yet been applied to Gaussian discriminant analysis. This work remedies that by proposing a new Gaussian discriminant analysis based on robust Bayesian analysis and near-ignorance priors. The model delivers cautious predictions, in the form of set-valued classes, when the available information is limited or imperfect. We present and discuss results of experiments on real and synthetic datasets, where for the latter we corrupt the test instances to see how our approach reacts to non-i.i.d. samples. Experiments show that including an imprecise component in Gaussian discriminant analysis produces reasonably cautious predictions, and that set-valued predictions correspond to instances for which the precise model performs poorly.
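To give a rough idea of the mechanism described in the abstract (sets of mean estimates leading to set-valued predictions), below is a minimal sketch, not the authors' exact model: it assumes diagonal covariances, builds a simple bounding box around each class mean (here from the standard error scaled by a hypothetical caution parameter `s`, rather than the paper's near-ignorance priors), and returns every class whose score interval is not dominated by another class's interval.

```python
"""Sketch of an imprecise diagonal Gaussian discriminant classifier.

Assumptions (not taken from the paper): diagonal covariances, a mean box of
half-width s * standard error per feature, and interval dominance to decide
which classes remain in the prediction set.
"""
import numpy as np


class ImpreciseDiagonalGDA:
    def __init__(self, s=1.0):
        self.s = s  # caution parameter controlling the width of the mean box

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_lo_, self.mu_hi_, self.var_, self.log_prior_ = {}, {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            n = len(Xc)
            mu = Xc.mean(axis=0)
            var = Xc.var(axis=0) + 1e-9           # avoid zero variance
            half = self.s * np.sqrt(var / n)      # box half-width around the sample mean
            self.mu_lo_[c], self.mu_hi_[c] = mu - half, mu + half
            self.var_[c] = var
            self.log_prior_[c] = np.log(n / len(X))
        return self

    def _score_bounds(self, x, c):
        # The quadratic term is maximised by the admissible mean closest to x
        # and minimised by the farthest corner of the box, feature by feature.
        lo, hi, var = self.mu_lo_[c], self.mu_hi_[c], self.var_[c]
        mu_best = np.clip(x, lo, hi)
        mu_worst = np.where(np.abs(x - lo) > np.abs(x - hi), lo, hi)
        const = self.log_prior_[c] - 0.5 * np.sum(np.log(2 * np.pi * var))
        upper = const - 0.5 * np.sum((x - mu_best) ** 2 / var)
        lower = const - 0.5 * np.sum((x - mu_worst) ** 2 / var)
        return lower, upper

    def predict_set(self, x):
        bounds = {c: self._score_bounds(x, c) for c in self.classes_}
        # Drop class c only if some other class beats it for every admissible
        # mean configuration, i.e. its upper bound falls below that class's lower bound.
        return [c for c in self.classes_
                if not any(bounds[d][0] > bounds[c][1]
                           for d in self.classes_ if d != c)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (5, 2)), rng.normal(1.5, 1.0, (5, 2))])
    y = np.array([0] * 5 + [1] * 5)
    model = ImpreciseDiagonalGDA(s=2.0).fit(X, y)
    print(model.predict_set(np.array([0.7, 0.7])))    # ambiguous point: may return both classes
    print(model.predict_set(np.array([-1.5, -1.5])))  # clear point: precise singleton prediction
```

With scarce data the boxes widen, more instances receive set-valued predictions, and the classifier abstains from committing to a single class, which is the cautious behaviour the abstract describes.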