- Keywords: dynamic computation graph, dynamic models, probabilistic programming
- Abstract: Many modern machine learning algorithms, such as automatic differentiation (AD) and variants of approximate Bayesian inference, can be understood as special cases of message passing on a computation graph. To permit learning complex models, recent approaches for extracting a computation graph focus on dynamic graphs built via so-called operator-overloading techniques. However, in contrast to source transformation, which is commonly used for static graphs, operator overloading does not permit analysis of the graph and naturally incurs higher overhead. In this paper, we present a combination of source transformation and operator overloading that allows extracting the underlying computation graph of complex machine learning models, e.g.~Bayesian nonparametric models or models with stochastic control flow. To track the execution of operations in a machine learning model at run-time, we inject additional statements into the existing program at compile-time using the intermediate representation of the program. We introduce an extension of the well-known Wengert list, used in many AD implementations, that records all necessary control flow information and any additional metadata. Finally, we discuss future applications of our approach, such as conjugacy exploitation in universal probabilistic programming languages, BUGS-like Gibbs sampling for dynamic models, and variational message passing.
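To make the recording mechanism concrete, here is a minimal sketch (an illustration, not the paper's implementation) of a classic Wengert list: operator overloading appends each primitive operation to a shared tape at run-time, and a reverse sweep over the tape accumulates gradients via the chain rule. All names (`Var`, `backward`) are hypothetical.

```python
class Var:
    """A value whose operations are recorded on a shared Wengert list (tape)."""

    def __init__(self, value, tape=None):
        self.value = value
        self.grad = 0.0
        self.tape = tape if tape is not None else []

    def _record(self, other, value, d_self, d_other):
        out = Var(value, self.tape)
        # Each tape entry stores the output and the inputs with their
        # local partial derivatives, in execution order.
        self.tape.append((out, [(self, d_self), (other, d_other)]))
        return out

    def __add__(self, other):
        return self._record(other, self.value + other.value, 1.0, 1.0)

    def __mul__(self, other):
        return self._record(other, self.value * other.value,
                            other.value, self.value)


def backward(result):
    """Replay the Wengert list in reverse, applying the chain rule."""
    result.grad = 1.0
    for out, inputs in reversed(result.tape):
        for var, partial in inputs:
            var.grad += partial * out.grad


x = Var(3.0)
y = Var(4.0, tape=x.tape)   # all variables must share one tape
z = x * y + x               # dz/dx = y + 1 = 5, dz/dy = x = 3
backward(z)
print(x.grad, y.grad)       # 5.0 3.0
```

Because the tape only ever sees the operations that actually executed, control flow (loops, branches, stochastic choices) is flattened away; the paper's proposal addresses exactly this by injecting statements at compile-time so that control flow information and metadata are recorded alongside the operations.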