Keywords: neural inhibition, deep learning, gating, mixture of experts, dynamic routing
TL;DR: We propose a global neural inhibition network and empirically show improvements across mixture-of-experts architectures, particularly in multi-task reinforcement learning.
Title: DYNAMICALLY GATED MIXTURE OF EXPERTS FOR MULTI-TASK REINFORCEMENT LEARNING
Abstract:
Multi-task reinforcement learning (MTRL) promises advantages over single-task RL by generalizing across tasks through parameter sharing and composition. Existing Mixture-of-Experts (MoE) methods rely on local routing or static compositional weights and cannot adapt to evolving temporal context. Because MTRL systems are inherently temporal and dynamic, we introduce a global recurrent inhibition network (GRIN) that performs dynamic gating across time, selectively modulating expert activations based on temporally accumulated context. Our formulation propagates information across time steps, preserving global activation information throughout the model. Using this gating approach, we obtain statistically significant improvements over state-of-the-art MTRL methods, including an empirical +3.7% improvement on the Metaworld MT50 benchmark.
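To make the described mechanism concrete, here is a minimal sketch of how a global recurrent gate might inhibit MoE expert outputs over a trajectory. This is not the authors' implementation: the module names, dimensions, sigmoid gating, and GRU-based recurrence are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GlobalRecurrentInhibitionGate(nn.Module):
    """Hypothetical sketch: a GRU cell accumulates temporal context and emits
    per-expert inhibition weights in [0, 1] that modulate expert outputs."""

    def __init__(self, input_dim: int, hidden_dim: int, num_experts: int):
        super().__init__()
        self.rnn = nn.GRUCell(input_dim, hidden_dim)
        self.to_gates = nn.Linear(hidden_dim, num_experts)

    def forward(self, x: torch.Tensor, h: torch.Tensor):
        h = self.rnn(x, h)                        # accumulate context across time steps
        gates = torch.sigmoid(self.to_gates(h))  # per-expert inhibition in [0, 1]
        return gates, h

class GatedMoE(nn.Module):
    """Illustrative MoE layer whose expert mixture is modulated by the
    recurrent gate at every time step of a trajectory."""

    def __init__(self, input_dim=64, hidden_dim=128, num_experts=8, out_dim=32):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(input_dim, out_dim), nn.ReLU())
            for _ in range(num_experts)
        )
        self.gate = GlobalRecurrentInhibitionGate(input_dim, hidden_dim, num_experts)
        self.hidden_dim = hidden_dim

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (T, B, input_dim), a trajectory of observations
        T, B, _ = obs_seq.shape
        h = obs_seq.new_zeros(B, self.hidden_dim)
        outputs = []
        for t in range(T):
            x = obs_seq[t]
            gates, h = self.gate(x, h)                                       # (B, E)
            expert_outs = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, out_dim)
            mixed = (gates.unsqueeze(-1) * expert_outs).sum(dim=1)           # inhibited mixture
            outputs.append(mixed)
        return torch.stack(outputs)  # (T, B, out_dim)

# Usage: a random 10-step trajectory with batch size 4
model = GatedMoE()
out = model(torch.randn(10, 4, 64))
print(out.shape)  # torch.Size([10, 4, 32])
```

The key difference from standard per-token MoE routing, as described in the abstract, is that the gate's hidden state carries information across time steps, so expert modulation depends on the accumulated trajectory context rather than on the current input alone.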
Primary Area: reinforcement learning
Submission Number: 3008