Abstract: Neural approaches to sequence labeling often use a Conditional Random Field (CRF) to model their output dependencies. We set out to establish Recurrent Neural Networks (RNNs) as an efficient alternative to CRFs, especially in tasks with a large number of output labels. We propose an adjusted actor-critic reinforcement learning algorithm to fine-tune RNNs (AC-RNN). Our comprehensive experiments suggest that AC-RNN efficiently matches the performance of the CRF on NER and CCG tagging, and outperforms it on Machine Transliteration, with overall faster training and a smaller memory footprint.
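The abstract does not detail the adjusted actor-critic algorithm itself, but the general idea of actor-critic fine-tuning for sequence labeling can be illustrated with a minimal sketch: an actor policy emits a label per token, a critic estimates the expected reward as a baseline, and the advantage (reward minus baseline) scales the policy-gradient update. The toy copy task, the tabular parameterization, and all hyperparameters below are illustrative assumptions, not the paper's method.

```python
import math, random

random.seed(0)

N_TOKENS = 3  # toy vocabulary; the "correct" label for a token is the token itself
ALPHA_ACTOR, ALPHA_CRITIC = 0.5, 0.5  # illustrative learning rates

# Actor: one logit per (token, label) pair; Critic: one value estimate per token.
# A real AC-RNN would use RNN hidden states here; tabular parameters keep the
# sketch self-contained.
theta = [[0.0] * N_TOKENS for _ in range(N_TOKENS)]
value = [0.0] * N_TOKENS

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train(episodes=2000, seq_len=5):
    for _ in range(episodes):
        seq = [random.randrange(N_TOKENS) for _ in range(seq_len)]
        for tok in seq:
            probs = softmax(theta[tok])
            # Actor samples a label from its current policy.
            a = random.choices(range(N_TOKENS), weights=probs)[0]
            r = 1.0 if a == tok else 0.0   # per-token reward (correct label)
            adv = r - value[tok]           # advantage = reward - critic baseline
            # Policy-gradient step: d log pi(a) / d theta_b = 1[b==a] - pi(b)
            for b in range(N_TOKENS):
                grad = (1.0 if b == a else 0.0) - probs[b]
                theta[tok][b] += ALPHA_ACTOR * adv * grad
            # Critic step: move the value estimate toward the observed reward.
            value[tok] += ALPHA_CRITIC * (r - value[tok])

def greedy_label(tok):
    """Decode with the actor's highest-scoring label."""
    return max(range(N_TOKENS), key=lambda b: theta[tok][b])

train()
```

After training on this trivial task, greedy decoding recovers the correct label for every token; the critic baseline mainly serves to reduce the variance of the actor's gradient, which is the standard motivation for actor-critic over plain REINFORCE.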