Factorial Hidden Markov Models for Learning Representations of Natural Language
Anjan Nepal, Alexander Yates
Dec 24, 2013 (modified: Dec 24, 2013) · ICLR 2014 conference submission · readers: everyone
Decision: submitted, no decision
Abstract: Most representation learning algorithms for language and image processing are local, in that they identify features for a data point based on surrounding points. Yet in language processing, the correct meaning of a word often depends on its global context. As a step toward incorporating global context into representation learning, we develop a representation learning algorithm that incorporates joint prediction into its technique for producing features for a word. We develop efficient variational methods for learning Factorial Hidden Markov Models from large texts, and use variational distributions to produce features for each word that are sensitive to the entire input sequence, not just to a local context window. Experiments on part-of-speech tagging and chunking indicate that the features are competitive with or better than existing state-of-the-art representation learning methods.
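To illustrate the idea of sequence-global word features described in the abstract, here is a minimal sketch, not the authors' actual model or variational algorithm. It runs standard forward-backward inference independently on several HMM chains (mimicking a fully factorized approximation to a Factorial HMM) and concatenates the per-chain posterior state marginals into a feature vector for each word. All function names and parameter values below are hypothetical.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Posterior state marginals p(z_t | x_1..x_T) for one HMM chain.

    pi: (K,) initial state distribution
    A:  (K, K) transition matrix, rows sum to 1
    B:  (K, V) emission matrix, rows sum to 1
    obs: sequence of observation (word) indices
    """
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    # Scaled forward pass
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    # Scaled backward pass
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    # Posterior marginals: each depends on the ENTIRE sequence
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

def factorial_features(chains, obs):
    """Concatenate posterior marginals from independent chains.

    chains: list of (pi, A, B) parameter tuples (one per latent chain).
    Returns a (T, sum of K_m) feature matrix: one row per word, where
    each row reflects global context rather than a fixed local window.
    """
    return np.hstack([forward_backward(pi, A, B, obs) for pi, A, B in chains])
```

Usage: calling `factorial_features` on a word-index sequence yields one feature row per token; unlike window-based embeddings, changing any word in the sequence can change every row, which is the "global context" property the abstract emphasizes.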