Keywords: generative adversarial networks, imitation learning, online learning, game theory
TL;DR: Online learning for generative adversarial networks
Abstract: Game-theoretic models of learning form a powerful framework for optimizing multi-objective architectures. Among these models are zero-sum architectures that have inspired adversarial learning frameworks. We extend these two-player frameworks by introducing a mediating neural agent whose role is to augment the players' observations so as to achieve certain maximum-entropy objectives.
We show that the new framework can be utilized for 1) efficient online training in multi-modal and adaptive environments and 2) addressing the ergodic-convergence and cyclic-dynamics issues of adversarial training. We also note that the proposed training framework resembles the ‘follow the perturbed leader’ family of online learning algorithms, where the perturbations result from the actions of the mediating agent.
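For readers unfamiliar with the ‘follow the perturbed leader’ (FTPL) scheme referenced above, the following is a minimal, generic sketch of FTPL over a finite action set. It is purely illustrative and not the paper's method: in the paper, the perturbations come from a learned mediating agent, whereas here they are i.i.d. exponential draws, and the names (`ftpl`, `eta`) are this sketch's own.

```python
import random

def ftpl(loss_stream, n_actions, eta=1.0, rng=None):
    """Follow the Perturbed Leader over a finite action set.

    Each round, play the action minimizing
    (cumulative past loss - fresh random perturbation),
    then observe that round's loss vector.
    """
    rng = rng or random.Random(0)
    cum = [0.0] * n_actions   # cumulative loss of each action
    total = 0.0               # learner's realized loss
    for losses in loss_stream:
        # Fresh exponential perturbation for every action, every round.
        noise = [rng.expovariate(1.0 / eta) for _ in range(n_actions)]
        a = min(range(n_actions), key=lambda i: cum[i] - noise[i])
        total += losses[a]
        for i in range(n_actions):
            cum[i] += losses[i]
    return total
```

Because the perturbation is redrawn each round, the "leader" is randomized, which is what breaks the cyclic best-response dynamics that plain follow-the-leader exhibits in adversarial settings.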
We validate our theoretical results by applying them to games with convex and non-convex losses as well as to generative adversarial architectures. Moreover, we customize the implementation of this algorithm for adversarial imitation learning applications, where we validate our assertions using a procedurally generated game environment as well as synthetic data.