Generative Flow Networks, or GFlowNets, are a new paradigm of neural network training, developed at MILA since 2021.
GFlowNets are related to Markov chain Monte Carlo (MCMC) methods (as they sample from a distribution specified by an energy function), to reinforcement learning (as they learn a policy to sample composed objects through a sequence of steps), to generative models (as they learn to represent and sample from a distribution), and to amortized variational methods (as they can be used to learn to approximate and sample from an otherwise intractable posterior, given a prior and a likelihood). GFlowNets are trained to generate an object x through a sequence of steps, with probability proportional to some reward function R(x) (or exp(−E(x)), with E denoting the energy function), which is given at the end of the generative trajectory.[1]
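To make this concrete, below is a minimal sketch of GFlowNet training using the trajectory-balance objective (one of the objectives proposed for GFlowNets), on an assumed toy task: objects are fixed-length bit strings built one bit at a time, and the reward favors strings with more 1s. The names here (ForwardPolicy, reward, N) are illustrative, not from the original work or any library API.

```python
# Minimal GFlowNet sketch with the trajectory-balance objective (assumed setup:
# PyTorch, toy task of building length-N bit strings one bit at a time).
import torch
import torch.nn as nn

N = 8  # length of the generated bit string


def reward(x):
    """Toy reward R(x) > 0: strings with more 1s get higher reward."""
    return x.sum().float() + 1.0


class ForwardPolicy(nn.Module):
    """Maps a partial state (prefix, with -1 marking unfilled positions)
    to logits over the two actions: append 0 or append 1."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, 2))
        self.log_Z = nn.Parameter(torch.zeros(()))  # learned log partition function

    def forward(self, state):
        return self.net(state)


policy = ForwardPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(2000):
    state = -torch.ones(N)       # -1 means "position not generated yet"
    log_pf = torch.zeros(())     # running sum of log forward probabilities
    bits = []
    for t in range(N):
        probs = torch.softmax(policy(state), dim=-1)
        a = int(torch.multinomial(probs, 1))
        log_pf = log_pf + torch.log(probs[a])
        bits.append(a)
        state = state.clone()    # avoid in-place edits on tensors in the graph
        state[t] = float(a)
    x = torch.tensor(bits)

    # Trajectory balance: log Z + sum_t log P_F(a_t | s_t) should equal
    # log R(x). The backward-policy term vanishes here because each string
    # has a unique construction order, so P_B = 1.
    loss = (policy.log_Z + log_pf - torch.log(reward(x))) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, sampling trajectories from the learned forward policy yields bit strings with frequency approximately proportional to their reward, which is the defining property of a GFlowNet sampler.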
Through their connection to generative models and variational inference, GFlowNets are also related to Active Inference....