Paper Link: https://openreview.net/forum?id=-wc0jAX3vrc
Paper Type: Short paper (up to four pages of content + unlimited references and appendices)
Abstract: Token classification is a fundamental task in computational linguistics. Token classification models, like other modern deep neural network models, are usually trained on the entire training set in every epoch, yet research has found that the entirety of the training data may not be needed in later epochs. Moreover, over-training on data that the model already handles properly may harm the model. Inspired by human pedagogy, we propose a teacher-aware learning structure for token classification models. After each epoch of training, the teacher selects the data it is uncertain of and the data it predicts differently from the student; only these are passed into the structure for training in the next epoch. As a proof of concept, we use a Bayesian linear classifier as the teacher and two commonly used backbone models as the student. Experiments show our method reduces the number of training iterations and improves model performance in most cases.
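The selection step described in the abstract (keep examples the teacher is uncertain about, plus examples where teacher and student disagree) can be sketched as follows. This is an illustrative reading of the abstract, not the authors' implementation: the teacher here is represented only by its per-token class probabilities, uncertainty is taken as predictive entropy, and the entropy quantile threshold `uncertainty_q` is an assumed hyperparameter; the paper's Bayesian linear classifier is not modeled.

```python
import numpy as np

def select_next_epoch(teacher_probs, student_preds, uncertainty_q=0.75):
    """Pick training indices for the next epoch.

    teacher_probs : (n, k) array of teacher class probabilities per token
    student_preds : (n,) array of student predicted class ids
    uncertainty_q : quantile of teacher entropy above which an example
                    counts as "uncertain" (assumed hyperparameter)
    """
    # Teacher uncertainty: predictive entropy of each example.
    entropy = -np.sum(teacher_probs * np.log(teacher_probs + 1e-12), axis=1)
    uncertain = entropy >= np.quantile(entropy, uncertainty_q)

    # Teacher-student disagreement on the hard (argmax) prediction.
    disagree = teacher_probs.argmax(axis=1) != student_preds

    # Union of both criteria, as the abstract describes.
    return np.flatnonzero(uncertain | disagree)

# Toy usage: example 1 is maximally uncertain, example 2 disagrees
# with the student, so both are selected for the next epoch.
probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
preds = np.array([0, 0, 0])
selected = select_next_epoch(probs, preds)
```

Training then proceeds on only the selected subset in the next epoch, which is how the method reduces the number of training iterations.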