A Variational Perspective on Generative Flow Networks

Published: 13 Apr 2023, Last Modified: 13 Apr 2023. Accepted by TMLR.
Abstract: Generative flow networks (GFNs) are a class of probabilistic models for sequentially sampling composite objects proportionally to a target distribution defined in terms of an energy function or a reward. GFNs are typically trained with a flow matching or trajectory balance objective, which matches forward and backward transition models over trajectories. In this work we introduce a variational objective for training GFNs, a convex combination of the reverse and forward KL divergences, and compare it to the trajectory balance objective when sampling from the forward and backward models, respectively. We show that, in certain settings, variational inference for GFNs is equivalent to minimizing the trajectory balance objective, in the sense that both methods compute the same score-function gradient. This insight suggests that in these settings, control variates, which are commonly used to reduce the variance of score-function gradient estimates, can also be used with the trajectory balance objective. We evaluate our findings and the performance of the proposed variational objective numerically by comparing it to the trajectory balance objective on two synthetic tasks.
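The score-function gradient and the control-variate idea mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's method: it estimates the gradient of a reverse KL divergence for a toy softmax model over a small discrete space, using the REINFORCE (score-function) estimator with an optional scalar baseline as a control variate. All names, the toy distribution, and the stand-in `log_reward` values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a categorical "sampler" over K objects with softmax parameters,
# and hypothetical log-rewards standing in for log R(x) of the target.
K = 5
logits = rng.normal(size=K)
log_reward = rng.normal(size=K)

def log_probs(logits):
    # Numerically stable log-softmax.
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def score_function_grad(logits, n_samples=10_000, use_baseline=True):
    """Score-function (REINFORCE) estimate of the gradient of the reverse KL
    KL(p_theta || p_target), with an optional scalar baseline control variate.

    Uses grad KL = E_p[ grad log p_theta(x) * (log p_theta(x) - log R(x)) ],
    which holds up to the constant log Z because E_p[grad log p_theta] = 0.
    """
    lp = log_probs(logits)
    probs = np.exp(lp)
    x = rng.choice(K, size=n_samples, p=probs)
    f = lp[x] - log_reward[x]              # per-sample "learning signal"
    b = f.mean() if use_baseline else 0.0  # baseline: leaves the estimator unbiased
    grad_lp = np.eye(K)[x] - probs         # grad log-softmax: one_hot(x) - probs
    return ((f - b)[:, None] * grad_lp).mean(axis=0)
```

Because each per-sample score `one_hot(x) - probs` sums to zero over components, the estimated gradient also sums to (numerically) zero, which is a quick sanity check; subtracting the baseline changes only the variance, not the expectation, of the estimate.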
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/zmheiko/variational-perspective-on-gflownets
Assigned Action Editor: ~Jakub_Mikolaj_Tomczak1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 612