Reinforcement Learning for Location-Aware Warehouse Scheduling

04 Mar 2022, 07:18 | ICLR 2022 GPL Poster
Keywords: Scheduling, Resource Management, Multi-Agent Reinforcement Learning, Planning, Proximal Policy Optimization
TL;DR: We propose a compact representation for the state-action space of agents in warehouse scheduling environments, and we show resilient performance across different environments.
Abstract: Recent techniques in dynamic scheduling and resource management have found applications in warehouse environments thanks to their ability to organize and prioritize tasks at a higher temporal resolution. The rise of deep reinforcement learning as a learning paradigm has enabled decentralized agent populations to discover complex coordination strategies. However, training multiple agents simultaneously introduces many obstacles, as observation and action spaces grow exponentially large. In our work, we experimentally quantify how various aspects of the warehouse environment (e.g., floor plan complexity, information about agents’ live location, level of task parallelizability) affect performance and execution priority. To achieve efficiency, we propose a compact representation of the state and action space for location-aware multi-agent systems, wherein each agent knows only its own and the task’s coordinates, and thus has only partial observability of the underlying Markov Decision Process. Finally, we show that agents trained in certain environments maintain performance in completely unseen settings, and we correlate performance degradation with floor plan geometry.
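The abstract's central idea, a compact per-agent observation containing only the agent's own coordinates and the task coordinates, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the normalization scheme, and the specific coordinate layout are assumptions.

```python
import numpy as np

def compact_observation(agent_xy, task_xy, floor_shape):
    """Build a compact per-agent observation vector.

    Each agent sees only its own (x, y) position and the target task's
    (x, y) position, normalized by the floor plan dimensions. The
    observation size is constant regardless of how many agents share the
    warehouse, so the joint observation space no longer grows
    exponentially with the agent count; the price is only partial
    observability of the underlying MDP.
    """
    h, w = floor_shape          # floor plan height and width in cells
    ax, ay = agent_xy
    tx, ty = task_xy
    return np.array([ax / w, ay / h, tx / w, ty / h], dtype=np.float32)

# A single agent at (3, 5) assigned a task at (10, 2) on an 8x16 floor:
obs = compact_observation(agent_xy=(3, 5), task_xy=(10, 2), floor_shape=(8, 16))
print(obs.shape)  # (4,) -- fixed-size, independent of the number of agents
```

Such a fixed-size vector can feed directly into a shared policy network (e.g., trained with PPO, as the keywords suggest), since every agent's observation has the same shape.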