Rationalized Co-Training

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission
Abstract: Co-training is a semi-supervised learning technique that leverages two views of the data. It trains a classifier on each view using a small set of labelled data, and each classifier then labels additional training data for the other. Intuitively, co-training works by encouraging agreement between the classifiers, an idea also exploited by co-regularization. In this work, we propose rationalized co-training, a variant of co-training that encourages agreement between the rationales behind the classifiers' predictions. Experiments on two datasets show that rationalized co-training reduces the error rates of the partially and fully supervised models by 32.3%, outperforming the error-rate reduction of vanilla co-training by 8.51%.
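The co-training loop the abstract describes — two view-specific classifiers that pseudo-label data for each other — can be sketched as below. This is a minimal illustration on synthetic data, not the paper's method: the two-view construction, the scikit-learn logistic-regression models, and the most-confident-point selection rule are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic two-view data: each view is an independently noisy copy
# of the same latent binary label z.
n = 500
z = rng.integers(0, 2, size=n)
view1 = z[:, None] + rng.normal(scale=1.0, size=(n, 2))
view2 = z[:, None] + rng.normal(scale=1.0, size=(n, 2))

# Small labelled seed set: 10 examples per class, rest unlabelled (-1).
labelled = np.zeros(n, dtype=bool)
labelled[np.flatnonzero(z == 0)[:10]] = True
labelled[np.flatnonzero(z == 1)[:10]] = True
y = np.where(labelled, z, -1)

clf1, clf2 = LogisticRegression(), LogisticRegression()

for _ in range(5):  # co-training rounds
    clf1.fit(view1[labelled], y[labelled])
    clf2.fit(view2[labelled], y[labelled])
    for clf, view in ((clf1, view1), (clf2, view2)):
        unlab = np.flatnonzero(~labelled)
        if unlab.size == 0:
            break
        # Each classifier pseudo-labels the unlabelled point it is most
        # confident about; that point joins the shared labelled pool,
        # so the partner sees it on its own view next round.
        probs = clf.predict_proba(view[unlab])
        i = unlab[np.argmax(probs.max(axis=1))]
        y[i] = clf.predict(view[i : i + 1])[0]
        labelled[i] = True
```

After five rounds each classifier has contributed five pseudo-labels, growing the labelled pool from 20 to 30 examples; the vanilla variant stops here, whereas the paper's proposal additionally compares the classifiers' rationales.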
Paper Type: short
