Combination of Information in Labeled and Unlabeled Data via Evidence Theory

Published: 01 Jan 2024, Last Modified: 11 Apr 2025, IEEE Trans. Artif. Intell. 2024, CC BY-SA 4.0
Abstract: For classification with few labeled and massive unlabeled patterns, co-training, which exploits the information in both labeled and unlabeled data to classify query patterns, is often employed to train classifiers in two distinct views. The classifiers teach each other by adding high-confidence unlabeled patterns to the training dataset of the other view. However, adding these patterns directly often harms the retrained classifiers, because some patterns with wrong predictions enter the training dataset; these wrong predictions must be accounted for to improve performance. To this end, we present a method called Combination of Information in Labeled and Unlabeled (CILU) data, based on evidence theory, to effectively extract and fuse complementary knowledge in labeled and unlabeled data. In CILU, patterns are characterized by two distinct views, and the unlabeled patterns with high-confidence predictions are first added to the other view. In each view, we can then train two classifiers from the few labeled training data and the high-confidence unlabeled patterns. The classifiers are fused by evidence theory, and their weights, which aim to reduce the harmful influence of wrong predictions, are learned by constructing an objective function on the labeled data. Since complementary information exists between the two distinct views, the fused classifiers of the two views are also combined. To extract more useful information from the unlabeled data, a semi-supervised Fuzzy C-means clustering paradigm is also employed to yield clustering results. For a query pattern, the classification results from the combined classifiers and the clustering results from the clustering partition are integrated to make the final class decision.
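
The abstract outlines a multi-step pipeline (two-view co-training, exchange of high-confidence pseudo-labels, weighted fusion of classifiers, and integration with clustering). The snippet below is a minimal sketch of that pipeline, not the authors' CILU implementation: it assumes the two views are given as separate feature matrices, uses logistic regression as the base classifier in each view, applies a fixed confidence threshold, and replaces the evidence-theoretic combination with learned weights by a simple grid-searched convex weighting of the two views' predicted probabilities fitted on the labeled data. The semi-supervised Fuzzy C-means component is omitted.

```python
# Minimal two-view co-training + cross-view fusion sketch (assumptions noted above).
import numpy as np
from sklearn.linear_model import LogisticRegression


def cotrain_and_fuse(Xl_v1, Xl_v2, y_l, Xu_v1, Xu_v2, threshold=0.9):
    """Co-train one classifier per view and fuse their probability outputs."""
    clf1 = LogisticRegression(max_iter=1000).fit(Xl_v1, y_l)
    clf2 = LogisticRegression(max_iter=1000).fit(Xl_v2, y_l)

    # Each view predicts the unlabeled pool; keep only confident predictions.
    p1, p2 = clf1.predict_proba(Xu_v1), clf2.predict_proba(Xu_v2)
    sel1, sel2 = p1.max(axis=1) >= threshold, p2.max(axis=1) >= threshold

    # High-confidence pseudo-labels from view 1 augment view 2, and vice versa.
    clf2 = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xl_v2, Xu_v2[sel1]]),
        np.concatenate([y_l, clf1.classes_[p1[sel1].argmax(axis=1)]]))
    clf1 = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xl_v1, Xu_v1[sel2]]),
        np.concatenate([y_l, clf2.classes_[p2[sel2].argmax(axis=1)]]))

    # Choose the cross-view fusion weight on the labeled data: a crude proxy
    # for CILU's learning of evidential weights via an objective on labels.
    best_w, best_acc = 0.5, -1.0
    for w in np.linspace(0.0, 1.0, 11):
        fused = w * clf1.predict_proba(Xl_v1) + (1.0 - w) * clf2.predict_proba(Xl_v2)
        acc = np.mean(clf1.classes_[fused.argmax(axis=1)] == y_l)
        if acc > best_acc:
            best_w, best_acc = w, acc

    def predict(Xq_v1, Xq_v2):
        # Fuse the two views' posteriors for query patterns.
        fused = (best_w * clf1.predict_proba(Xq_v1)
                 + (1.0 - best_w) * clf2.predict_proba(Xq_v2))
        return clf1.classes_[fused.argmax(axis=1)]

    return predict
```

A query pattern's two feature views are passed to the returned predict function. In CILU proper, this fusion step would be carried out with evidence-theoretic mass functions and learned weights, and the result would additionally be integrated with the semi-supervised Fuzzy C-means partition before the final class decision.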
