Investigating Generalization Behaviours of Generative Flow Networks

Published: 17 Jun 2024, Last Modified: 19 Jul 2024 · 2nd SPIGM @ ICML Oral · CC BY 4.0
Keywords: Generative Flow Networks, GFlowNets, Generative Models, Generalization
TL;DR: We empirically investigate some of the hypothesized mechanisms of generalization in GFlowNets and find that the functions that GFlowNets learn to approximate have structure favourable for generalization.
Abstract: Generative Flow Networks (GFlowNets, GFNs) are a generative framework for learning unnormalized probability mass functions over discrete spaces. Since their inception, GFlowNets have proven useful for learning generative models in applications where the majority of the discrete space is unvisited during training. This has inspired some to hypothesize that GFlowNets, when paired with deep neural networks (DNNs), have favourable _generalization_ properties. In this work, we empirically verify some of the hypothesized _mechanisms_ of generalization of GFlowNets. In particular, we find that the functions that GFlowNets learn to approximate have an implicit underlying structure that facilitates generalization. We also find that GFlowNets are sensitive to being trained offline and off-policy; however, the reward implicitly learned by GFlowNets is robust to changes in the training distribution.
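To make the framing concrete: a GFlowNet is trained so that its terminal sampling distribution satisfies $P(x) \propto R(x)$ for a given unnormalized reward $R$. The abstract does not state which training objective the paper uses, so the following is a minimal illustrative sketch of one common choice, the trajectory-balance loss (Malkin et al., 2022); the function and variable names here are hypothetical, not taken from the paper.

```python
import torch

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Trajectory-balance loss over a batch of complete trajectories.

    log_Z:      learned scalar, log of the partition-function estimate
    log_pf:     (batch,) summed log P_F(s_{t+1} | s_t) along each trajectory
    log_pb:     (batch,) summed log P_B(s_t | s_{t+1}) along each trajectory
    log_reward: (batch,) log R(x) of each terminal state
    """
    # At the optimum, Z * prod P_F = R(x) * prod P_B for every trajectory,
    # which implies the model samples terminal states x with P(x) ∝ R(x).
    residual = log_Z + log_pf - log_reward - log_pb
    return residual.pow(2).mean()

# Toy usage with random stand-in values for the trajectory statistics.
log_Z = torch.zeros(1, requires_grad=True)
log_pf, log_pb, log_reward = torch.randn(8), torch.randn(8), torch.randn(8)
loss = trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward)
loss.backward()
```

Because the target $P(x) \propto R(x)$ must hold even on states never visited during training, any regularity the learned flows exploit across the state space is exactly the kind of structure the paper probes.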
Submission Number: 86