A study on integrating distinct classifiers with bidirectional LSTM for Slot Filling task

Published: 01 Jan 2018, Last Modified: 20 May 2025 · KSE 2018 · License: CC BY-SA 4.0
Abstract: Despite decades of investigation in Spoken Language Understanding, the slot filling task, framed as sequence labeling within a specific domain, remains challenging and continues to attract many researchers. For this task, the Recurrent Conditional Random Field (RCRF) is a popular model that learns latent representations of the data, which are then used as input to a CRF classifier. Our proposed model, in contrast, employs a variant of RNNs called Long Short-Term Memory networks (LSTMs), which largely mitigate a well-known downside of RNNs: vanishing gradients. Additionally, we conducted experiments on integrating bidirectional LSTMs with distinct classifiers, e.g. CRFs and SVMs, which are then trained jointly. The experimental results show that these combinations are beneficial on both the Airline Travel Information System (ATIS) and DARPA Communicator datasets, compared with the state-of-the-art model.
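To make the architecture described above concrete, the following is a minimal sketch of a bidirectional LSTM tagger in plain NumPy: a forward and a backward LSTM pass over the token embeddings, concatenation of the two hidden states per token, and a classifier head over the tag set. This is not the authors' implementation; the weights here are untrained random matrices, the dimensions are arbitrary, and a simple linear/argmax classifier stands in for the CRF or SVM layer the paper actually integrates.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    # One LSTM step; gates are stacked in z as [input, forget, output, candidate].
    z = W @ x + U @ h + b
    H = h.size
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # cell candidate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def run_lstm(xs, W, U, b, H):
    # Run an LSTM over a sequence of embeddings, returning all hidden states.
    h, c = np.zeros(H), np.zeros(H)
    out = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        out.append(h)
    return out

def bilstm_tag(xs, params, n_tags):
    # Forward pass, backward pass, concatenate per token, then classify.
    Wf, Uf, bf, Wb, Ub, bb, Wout = params
    H = bf.size // 4
    fwd = run_lstm(xs, Wf, Uf, bf, H)
    bwd = run_lstm(xs[::-1], Wb, Ub, bb, H)[::-1]
    feats = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
    # A linear layer + argmax stands in for the CRF/SVM classifier here.
    return [int(np.argmax(Wout @ f)) for f in feats]

D, H, T = 5, 4, 3  # embedding dim, hidden size, tag count (all arbitrary)
params = (rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H),
          rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H),
          rng.normal(size=(T, 2*H)))
sentence = [rng.normal(size=D) for _ in range(6)]  # 6 random token embeddings
tags = bilstm_tag(sentence, params, T)  # one tag id per token
```

Joint training, as the abstract describes, would backpropagate the classifier's loss (CRF log-likelihood or SVM hinge loss) through the concatenated BiLSTM states so that both components are optimized simultaneously.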