Abstract: In this paper, we study the problem of learning the structure of Markov networks that permit efficient inference. We formulate our problem as an optimization problem: maximize the likelihood of the model subject to a bound on the inference complexity of the resulting structure. The inference complexity is measured with respect to any chosen inference algorithm (exact or approximate) and any distribution over marginal or conditional queries. We relate our work to previous approaches for learning bounded tree-width models and arithmetic circuits. The main contribution of our work is to isolate the inference penalty from the incremental structure-building process. We show that our algorithm can be used to learn networks that bound the inference time of both exact and approximate algorithms.
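The formulation described above can be sketched schematically as a constrained optimization problem (an illustrative rendering, not the paper's own notation; the structure $\mathcal{G}$, data $\mathcal{D}$, algorithm $\mathcal{A}$, query distribution $Q$, and budget $B$ are assumed symbols):

```latex
\max_{\mathcal{G}} \; \log P(\mathcal{D} \mid \mathcal{G})
\quad \text{subject to} \quad
\mathbb{E}_{q \sim Q}\!\left[\mathrm{Cost}_{\mathcal{A}}(\mathcal{G}, q)\right] \le B
```

Here $\mathrm{Cost}_{\mathcal{A}}(\mathcal{G}, q)$ stands for the inference cost of answering query $q$ on structure $\mathcal{G}$ with algorithm $\mathcal{A}$, making the abstract's point concrete: the complexity bound is parameterized by both the chosen algorithm and the query distribution.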