Abstract: A variety of NLP tasks require semantic information extracted from sentence input. Capturing meaning through Abstract Meaning Representation (AMR) parsing requires concept identification as a first step. In this work, the task of concept identification is approached with sequence-to-sequence models. Building on an encoder-decoder architecture with an attention mechanism, concepts are separated into verbs and non-verbs. Different alignment strategies are applied to increase the amount of training and test data available to the model, yielding a merged dataset with twice as many instances. This increase in training data produces a 10% improvement for the sequence-to-sequence model.
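The core of the encoder-decoder architecture mentioned above is the attention step, which lets the decoder weight encoder states when producing each concept. The following is a minimal sketch of dot-product attention in NumPy; the function names and shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention(decoder_state, encoder_states):
    # decoder_state: (d,) current decoder hidden state
    # encoder_states: (T, d) hidden states for each input token
    scores = encoder_states @ decoder_state   # (T,) dot-product scores
    weights = softmax(scores)                 # (T,) attention distribution
    context = weights @ encoder_states        # (d,) weighted context vector
    return weights, context
```

The context vector is typically concatenated with the decoder state before predicting the next output concept (e.g. a verb or non-verb label).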