Abstract: Artificial neural networks used for reinforcement learning are structurally rigid: each optimized parameter is tied to a specific placement in the network structure. Because the number of optimized parameters depends directly on the network's structure, this rigidity limits the ability to optimize a single policy across multiple environments that do not share input and output spaces. In this paper, we present Structurally Flexible Neural Networks (SFNNs), which consist of connected gated recurrent units (GRUs) serving as synaptic plasticity rules and linear layers serving as neurons. In contrast to earlier work, SFNNs contain several different sets of parameterized building blocks. We show that SFNNs can overcome the challenging symmetry dilemma: the problem of optimizing units with shared parameters so that each expresses a different representation during deployment. The same SFNN can learn to solve three classic control environments that have different input/output spaces. SFNNs thus represent a step toward a more general model capable of solving several environments at once.
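To make the idea of shared plasticity rules concrete, the following is a minimal sketch of one layer in the spirit described above: a single GRU cell, shared across all synapses, updates a per-synapse hidden state from local pre- and post-synaptic activity, while the neuron itself is a plain linear map. The names (`SynapseGRU`, `SFNNLayer`), the hidden-state size, and the choice of using the first state component as the synaptic weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SynapseGRU:
    """One GRU cell shared by every synapse (assumed minimal variant).

    Each synapse keeps its own hidden state, so identical parameters can
    still express different per-synapse behavior over time -- the essence
    of the symmetry dilemma mentioned in the abstract.
    """
    def __init__(self, in_dim, hid_dim, rng):
        s = 1.0 / np.sqrt(hid_dim)
        self.Wz = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))  # update gate
        self.Wr = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))  # reset gate
        self.Wh = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))  # candidate

    def step(self, x, h):
        # x: (n_syn, in_dim) local signals; h: (n_syn, hid_dim) synapse states
        xh = np.concatenate([x, h], axis=1)
        z = sigmoid(xh @ self.Wz.T)
        r = sigmoid(xh @ self.Wr.T)
        cand = np.tanh(np.concatenate([x, r * h], axis=1) @ self.Wh.T)
        return (1.0 - z) * h + z * cand

class SFNNLayer:
    """Linear neurons whose weights live inside GRU-updated synapse states."""
    def __init__(self, n_in, n_out, hid_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.n_in, self.n_out = n_in, n_out
        self.rule = SynapseGRU(in_dim=2, hid_dim=hid_dim, rng=rng)
        # Per-synapse hidden state; component 0 is read out as the weight.
        self.h = rng.normal(0.0, 0.1, (n_in * n_out, hid_dim))

    def forward(self, pre):
        W = self.h[:, 0].reshape(self.n_in, self.n_out)
        post = pre @ W                        # linear neuron
        # Local signal for each (i, j) synapse: (pre_i, post_j).
        x = np.stack([np.repeat(pre, self.n_out),
                      np.tile(post, self.n_in)], axis=1)
        self.h = self.rule.step(x, self.h)    # plasticity update
        return post
```

Because the rule's parameters do not depend on `n_in` or `n_out`, the same `SynapseGRU` could in principle be reused across layers (or environments) of different sizes, which is what removes the structural rigidity discussed above.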