An Implicit Relationship Extraction Model Based on Improved Attention and Gated Decoding for Intent Recognition and Slot Filling

Published: 01 Jan 2024 · Last Modified: 12 Apr 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: For natural language understanding of short texts, pipeline models and joint learning models based on deep learning are the two common approaches to intent recognition and slot filling. Because errors propagate through a pipeline, deviations in its first stage often compromise the final result. Although joint learning has produced a number of contributions to intent recognition, most methods do not explicitly model the relationship between intents and slots. We propose a novel joint model for intent recognition and slot filling that incorporates an enhanced self-attention mechanism and a weight-gated unit channel; this channel extracts the correlation between intent and slot during decoding. Evaluated on the public ATIS and SNIPS datasets, our model improves accuracy by approximately 1.0% over current mainstream attention-based models.
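
To make the idea concrete, the sketch below shows one way a joint intent/slot model with a gated intent-to-slot channel could be wired up in PyTorch. The module names, dimensions, and the sigmoid-gate fusion are illustrative assumptions rather than the paper's exact architecture, which is not specified in the abstract.

```python
# Minimal sketch of a joint intent/slot model with an attention-gated
# fusion channel. All names, sizes, and the gating formulation are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class JointIntentSlotSketch(nn.Module):
    def __init__(self, vocab_size, num_intents, num_slots,
                 emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        enc_dim = 2 * hidden_dim
        # Self-attention over encoder states (stand-in for the paper's
        # "enhanced" self-attention).
        self.attn = nn.MultiheadAttention(enc_dim, num_heads=4,
                                          batch_first=True)
        self.intent_head = nn.Linear(enc_dim, num_intents)
        # Gating unit: decides, per token, how much intent-level context
        # to mix into the slot representation (weight-gated channel analogue).
        self.gate = nn.Linear(2 * enc_dim, enc_dim)
        self.slot_head = nn.Linear(enc_dim, num_slots)

    def forward(self, token_ids):
        x = self.embedding(token_ids)              # (B, T, E)
        h, _ = self.encoder(x)                     # (B, T, 2H)
        a, _ = self.attn(h, h, h)                  # self-attended states
        sent = a.mean(dim=1)                       # utterance summary
        intent_logits = self.intent_head(sent)     # (B, num_intents)
        # Broadcast the intent-level context to every token and gate it.
        ctx = sent.unsqueeze(1).expand_as(a)       # (B, T, 2H)
        g = torch.sigmoid(self.gate(torch.cat([a, ctx], dim=-1)))
        slot_logits = self.slot_head(g * a + (1 - g) * ctx)
        return intent_logits, slot_logits


if __name__ == "__main__":
    model = JointIntentSlotSketch(vocab_size=1000, num_intents=7, num_slots=20)
    ids = torch.randint(1, 1000, (2, 12))          # batch of 2, length 12
    intent_logits, slot_logits = model(ids)
    print(intent_logits.shape, slot_logits.shape)  # (2, 7) and (2, 12, 20)
```

The gate produces a per-token weight between 0 and 1, so each slot decision can draw more or less on the shared intent context, which is one simple way to realize the intent-slot correlation the abstract describes.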
