Decision: Conference Poster
Abstract: EDML is a recently proposed algorithm for learning parameters in
Bayesian networks. It was originally derived in terms of approximate
inference on a meta-network which underlies the Bayesian approach to
parameter estimation. While this initial derivation helped discover
EDML in the first place and provided a concrete context for
identifying some of its properties (e.g., in contrast to EM), the
formal setting was somewhat tedious, given the number of concepts it
drew on. In this paper, we propose a greatly simplified perspective on
EDML which casts it as a general approach to continuous
optimization. The new perspective has several advantages. First, it
makes immediate some results that were non-trivial to prove
initially. Second, it facilitates the design of EDML algorithms for
new graphical models, leading to a new algorithm for learning
parameters in Markov networks. We derive this algorithm here and
provide an empirical comparison with a commonly used
gradient method, showing that EDML can find better estimates several
times faster.
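
The abstract describes recasting EDML as a general approach to continuous optimization, in which parameter estimates are iterated to a fixed point. Since the page gives only the abstract, the sketch below illustrates that generic idea only: a stationarity condition is rewritten as a fixed-point equation x = g(x) and solved by damped iteration. The function names, the toy objective, and the damping scheme are illustrative assumptions and are not the paper's EDML updates.

```python
import numpy as np

def fixed_point_optimize(g, x0, damping=0.5, tol=1e-8, max_iter=1000):
    """Damped fixed-point iteration: x <- (1 - damping) * x + damping * g(x).

    A stationary point of the objective is recovered as a fixed point
    x* = g(x*). Illustrative sketch only, not the paper's algorithm.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1.0 - damping) * x + damping * np.asarray(g(x), dtype=float)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy objective: f(x) = 0.5 * (x - 3)**2 + log(1 + exp(x)).
# Setting f'(x) = (x - 3) + sigmoid(x) = 0 gives the fixed-point
# form x = 3 - sigmoid(x); the map is a contraction here, so the
# iteration converges to the unique minimizer.
x_star = fixed_point_optimize(lambda x: 3.0 - sigmoid(x), x0=0.0)
print(x_star)  # approximately 2.108
```

In this view, each iteration applies a cheap, closed-form update map rather than a line search, which is the structural contrast with gradient methods that the abstract's empirical comparison draws.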