Entropy Weight Allocation: Positive-unlabeled Learning via Optimal Transport

SDM 2022 (modified: 24 Apr 2023)
Abstract: Positive-unlabeled learning (PU learning) addresses the setting in which only a fraction of the positive instances are labeled. Because no negative instances are available, ordinary learning models cannot be applied directly. Existing PU learning methods either explicitly select some unlabeled instances as negative instances in advance or reformulate the task as a weighted learning problem. Because they operate in such an ad-hoc fashion, these methods often perform poorly and have limited applicability. This paper proposes entropy weight allocation (EWA), a novel instance-dependent weighting method for PU learning based on optimal transport (OT). More specifically, we assign each unlabeled instance a carefully derived weight indicating how likely it is to be an underlying negative instance. Any ordinary weighted learning model can then be used to obtain a PU classifier. By combining EWA with four well-known classification models, we show that EWA is a broad-spectrum weighting method that can boost almost all mainstream machine learning models for PU learning.
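
The abstract does not give the exact EWA formulation, so the following is only a minimal, hypothetical sketch of the general recipe it describes: compute an entropy-regularized optimal-transport plan between the unlabeled and positive instances, convert each unlabeled instance's transport cost into a weight for how likely it is to be negative, and pass those weights to any weighted classifier (here scikit-learn's LogisticRegression). The squared-Euclidean cost, the normalization, and the cost-to-weight mapping are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def sinkhorn_plan(a, b, cost, reg=0.1, n_iter=200):
    """Entropy-regularized OT plan between histograms a and b via Sinkhorn iterations."""
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]


def negative_weights(X_unlabeled, X_pos, reg=0.1):
    """Hypothetical weighting: unlabeled points that are expensive to transport
    onto the positive set receive larger 'negative' weights (assumption, not the
    paper's EWA rule)."""
    n_u, n_p = len(X_unlabeled), len(X_pos)
    cost = np.linalg.norm(X_unlabeled[:, None, :] - X_pos[None, :, :], axis=-1) ** 2
    cost /= cost.max()                                  # normalize cost to [0, 1] for stability
    a = np.full(n_u, 1.0 / n_u)                         # uniform mass on unlabeled instances
    b = np.full(n_p, 1.0 / n_p)                         # uniform mass on positive instances
    plan = sinkhorn_plan(a, b, cost, reg)
    per_point_cost = (plan * cost).sum(axis=1) * n_u    # expected transport cost per unlabeled point
    return per_point_cost / per_point_cost.max()        # rescale weights to [0, 1]


# Usage: fit any weighted classifier on positives (weight 1) plus unlabeled-as-negative (weight w).
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=+1.0, size=(50, 2))              # toy labeled positives
X_unl = rng.normal(loc=0.0, scale=1.5, size=(200, 2))   # toy unlabeled mixture
w_neg = negative_weights(X_unl, X_pos)

X = np.vstack([X_pos, X_unl])
y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))])
sample_weight = np.concatenate([np.ones(len(X_pos)), w_neg])
clf = LogisticRegression().fit(X, y, sample_weight=sample_weight)
```

Because the weights enter only through `sample_weight`, the same recipe plugs into any classifier that accepts per-instance weights, which is the "broad-spectrum" property the abstract claims.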