Large-Scale Word Representation Features for Improved Spoken Language Understanding

Published: 04 Apr 2015, Last Modified: 18 Jun 2024. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). License: CC BY 4.0
Abstract: Recently there has been great interest in applying word representation techniques to various natural language processing (NLP) scenarios. Word representation features from techniques such as Brown clustering or spectral clustering are generally computed from large corpora of unlabeled data in a completely unsupervised manner. These features can then be included directly as supplementary features alongside the standard feature representations used in NLP tasks. In this paper, we apply these techniques to the tasks of domain classification and intent detection in a spoken language understanding (SLU) system. In experiments in a personal assistant domain, features derived from both Brown clustering and spectral clustering improved the performance of all models, and combining the two techniques yielded further gains.
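The setup described in the abstract can be illustrated with a minimal sketch (not the authors' code): unsupervised word-to-cluster assignments, here a tiny hypothetical stand-in for Brown or spectral clusters learned from a large unlabeled corpus, are turned into features and concatenated with standard n-gram features before training an intent classifier. All data, cluster IDs, and feature names below are illustrative assumptions.

```python
# Sketch: n-gram features + word-cluster features for intent detection.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack

# Hypothetical word -> cluster-id mapping (in practice, loaded from a Brown
# or spectral clustering run over large unlabeled text).
word2cluster = {
    "play": "c01", "music": "c02", "song": "c02",
    "call": "c03", "mom": "c04", "alarm": "c05", "set": "c06",
}

def ngram_feats(tokens):
    # Standard unigram and bigram indicator features.
    feats = {"w=" + t: 1 for t in tokens}
    for a, b in zip(tokens, tokens[1:]):
        feats["bi=" + a + "_" + b] = 1
    return feats

def cluster_feats(tokens):
    # Supplementary features: indicators over cluster IDs of the words.
    return {"c=" + word2cluster[t]: 1 for t in tokens if t in word2cluster}

# Toy labeled utterances (hypothetical).
data = [
    ("play some music", "play_music"),
    ("play a song", "play_music"),
    ("call mom", "make_call"),
    ("set an alarm", "set_alarm"),
]
tokens = [u.split() for u, _ in data]
labels = [y for _, y in data]

# Vectorize the two feature groups separately, then concatenate them,
# mirroring the "supplementary features" setup described in the abstract.
v_ngram, v_clust = DictVectorizer(), DictVectorizer()
X_ngram = v_ngram.fit_transform(ngram_feats(t) for t in tokens)
X_clust = v_clust.fit_transform(cluster_feats(t) for t in tokens)
X = hstack([X_ngram, X_clust])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Classify a new utterance with the same combined representation.
test = "play that song".split()
X_test = hstack([v_ngram.transform([ngram_feats(test)]),
                 v_clust.transform([cluster_feats(test)])])
print(clf.predict(X_test))  # expected: ['play_music']
```

The cluster features let words unseen in the labeled training data (but present in the unlabeled corpus) still fire informative features, which is the main motivation for adding them on top of standard lexical features.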