Keywords: continual learning, multi-agent, Overcooked, benchmark, reinforcement learning, cooperation
TL;DR: The first benchmark for continual multi-agent reinforcement learning on Overcooked
Abstract: Benchmarks play a crucial role in the development and analysis of reinforcement learning (RL) algorithms, and the availability of suitable environments strongly shapes research directions. One particularly underexplored intersection is continual learning (CL) in cooperative multi-agent reinforcement learning (MARL). To address this gap, we introduce **MEAL** (**M**ulti-agent **E**nvironments for **A**daptive **L**earning), the first benchmark tailored to continual multi-agent reinforcement learning. Existing CL benchmarks run their environments on the CPU, which creates computational bottlenecks and limits the length of task sequences. MEAL leverages JAX for GPU acceleration, enabling continual learning across sequences of up to 100 tasks on a standard desktop PC within a few hours. Evaluating popular CL and MARL methods reveals that naïvely combining them fails to preserve network plasticity or to prevent catastrophic forgetting of cooperative behaviors.
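The abstract attributes MEAL's ability to run long task sequences to JAX GPU acceleration. As a rough illustration of the underlying pattern only (this is not the MEAL or Overcooked API; the toy environment, `toy_reset`, `toy_step`, and all sizes below are hypothetical), a jit-compiled rollout can be vmapped over many environment copies so entire rollouts stay on the accelerator:

```python
# Minimal, self-contained sketch of the JAX pattern that makes long task
# sequences cheap: jit-compile the rollout once and vmap the (toy) environment
# over many parallel copies, so stepping happens entirely on the GPU.
# The toy "environment" below is a placeholder, NOT the MEAL/Overcooked API.

import jax
import jax.numpy as jnp

def toy_reset(key):
    # Random 2-D agent position standing in for a real Overcooked layout state.
    return jax.random.uniform(key, (2,))

def toy_step(state, action):
    # Move the agent and pay a distance-to-origin penalty as a stand-in reward.
    next_state = state + 0.1 * action
    reward = -jnp.linalg.norm(next_state)
    return next_state, reward

@jax.jit
def rollout(keys, actions):
    # keys: (num_envs,) PRNG keys; actions: (num_steps, num_envs, 2)
    states = jax.vmap(toy_reset)(keys)

    def step(states, acts):
        states, rewards = jax.vmap(toy_step)(states, acts)
        return states, rewards

    _, rewards = jax.lax.scan(step, states, actions)
    return rewards  # shape: (num_steps, num_envs)

num_envs, num_steps = 64, 128
keys = jax.random.split(jax.random.PRNGKey(0), num_envs)
actions = jax.random.normal(jax.random.PRNGKey(1), (num_steps, num_envs, 2))
print(rollout(keys, actions).shape)  # (128, 64)
```

In a continual-learning setting, the same compiled rollout is reused across each task in the sequence, which is what keeps long sequences (e.g., 100 tasks) tractable on a single desktop GPU.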
Confirmation: I understand that authors of each paper submitted to EWRL may be asked to review 2-3 other submissions to EWRL.
Serve As Reviewer: ~Tristan_Tomilin1
Track: Regular Track: unpublished work
Submission Number: 172