Learning Latent Groups with Hinge-loss Markov Random Fields

ICML 2013 Inferning submission
Decision: Poster
Abstract: Probabilistic models with latent variables are powerful tools that can help explain related phenomena by mediating dependencies among them. Learning in the presence of latent variables can be difficult, however, because it requires either marginalizing them out or, more commonly, maximizing a lower bound on the marginal likelihood. In this work, we show how to learn hinge-loss Markov random fields (HL-MRFs) that contain latent variables. HL-MRFs are an expressive class of undirected probabilistic graphical models for which inference of most probable explanations is a convex optimization. By incorporating latent variables into HL-MRFs, we can build models that express rich dependencies among those latent variables. We use a hard expectation-maximization algorithm to learn the parameters of such a model, leveraging fast inference for learning. In our experiments, this combination of inference and learning discovers useful groups of users and hashtags in a Twitter data set.
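
The hard-EM loop the abstract describes can be made concrete with a short sketch. The following Python toy is an illustration only, not the authors' implementation: practical HL-MRF systems solve the convex MAP problem with a specialized ADMM-based solver, whereas this sketch leans on a generic box-constrained optimizer. The linear-potential representation and all function names are assumptions, and the perceptron-style M-step (using the MAP state in place of a model expectation) is one common choice the abstract does not pin down.

# A minimal hard-EM sketch for an HL-MRF with latent variables.
# Hypothetical names throughout; assumes each potential is a callable l_j
# that maps the variable vector y in [0,1]^n to a linear score.
import numpy as np
from scipy.optimize import minimize

def energy(y, w, potentials):
    """Weighted sum of hinge-loss potentials: E(y) = sum_j w_j * max(l_j(y), 0)."""
    return sum(w_j * max(l_j(y), 0.0) for w_j, l_j in zip(w, potentials))

def map_inference(w, potentials, n_vars, clamped=None):
    """MAP inference: minimize the convex energy over the box [0, 1]^n.
    Variables in `clamped` (index -> value) are fixed via degenerate bounds."""
    bounds = [(0.0, 1.0)] * n_vars
    x0 = np.full(n_vars, 0.5)
    for i, v in (clamped or {}).items():
        bounds[i], x0[i] = (v, v), v
    res = minimize(energy, x0, args=(w, potentials), bounds=bounds,
                   method="L-BFGS-B")
    return res.x

def hard_em(potentials, y_obs, observed_idx, n_vars, n_iters=25, lr=0.1):
    """Alternate between (E) imputing latent variables by conditional MAP and
    (M) a perceptron-style weight update that uses the unconditioned MAP
    state as a stand-in for model expectations."""
    w = np.ones(len(potentials))
    clamp = {i: y_obs[i] for i in observed_idx}
    for _ in range(n_iters):
        # E-step: most probable completion of the latent variables,
        # with the observed training values held fixed.
        y_full = map_inference(w, potentials, n_vars, clamped=clamp)
        # M-step: move each weight so the completed training state
        # becomes lower-energy than the unconstrained MAP state.
        y_map = map_inference(w, potentials, n_vars)
        grad = np.array([max(l_j(y_full), 0.0) - max(l_j(y_map), 0.0)
                         for l_j in potentials])
        w = np.maximum(w - lr * grad, 0.0)  # HL-MRF weights stay nonnegative
    return w

In the paper's Twitter setting, the latent variables would roughly correspond to soft group memberships of users and hashtags, the observed variables to interaction and co-occurrence evidence, and the potentials to dependencies such as "users in the same group use hashtags from the same group." Because each E-step is itself a convex MAP problem, the fast inference the abstract highlights is what makes the alternating scheme cheap to run.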