Simulation-to-Reality Transfer of a Two-Stage Deep Reinforcement Learning Controller for Autonomous Load Carrier Approaching

Abstract: Nowadays, transportation tasks in logistics and manufacturing are commonly performed by Automated Guided Vehicles (AGVs). To enhance the driving and navigation capabilities of these AGVs, agents based on Deep Reinforcement Learning (DRL) are being extensively researched. Such agents are usually trained in an end-to-end fashion in simulation. However, when deployed in a real environment, these agents often perform poorly. This effect is caused by the discrepancy between simulation and reality, the so-called simulation-to-reality (sim-to-real) gap. In this work, we mitigate this gap for a state-of-the-art DRL method designed to approach freely positioned load carriers based on RGB image data [1]. To study the transfer from simulation to reality, we utilize a monofork AGV of the type typically used in industrial environments to transport load carriers. We show that introducing Domain Randomization (DR) techniques into training successfully mitigates the gap mentioned above. Additionally, by extending the method, we compensate for the influence of an inaccurately placed camera, and we show that the approach is applicable to other types of load carriers.
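As a rough illustration of the Domain Randomization idea mentioned above, the sketch below perturbs the visual appearance of simulated RGB observations and the nominal camera pose at each episode reset. All function names, parameter ranges, and the choice of perturbations are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_observation(rgb, rng):
    """Apply simple visual domain randomization to an RGB image
    (assumed uint8, shape HxWx3): jitter brightness and contrast and
    add Gaussian pixel noise, so the policy sees varied appearances."""
    img = rgb.astype(np.float32) / 255.0
    brightness = rng.uniform(-0.2, 0.2)       # additive brightness shift
    contrast = rng.uniform(0.8, 1.2)          # multiplicative contrast
    noise = rng.normal(0.0, 0.02, img.shape)  # sensor-like noise
    img = np.clip((img - 0.5) * contrast + 0.5 + brightness + noise, 0.0, 1.0)
    return (img * 255).astype(np.uint8)

def randomize_camera_pose(nominal_xyz, rng, max_offset=0.03):
    """Perturb the nominal camera position (metres) per episode, so the
    trained policy tolerates an inaccurately mounted real camera."""
    return np.asarray(nominal_xyz) + rng.uniform(-max_offset, max_offset, 3)
```

In a typical setup, such perturbations would be sampled once per training episode in the simulator, exposing the agent to appearance and calibration variations it will encounter on real hardware.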