Latent Communication for Zero-shot Stitching in Reinforcement Learning

Published: 01 Aug 2024, Last Modified: 09 Oct 2024 · EWRL17 · CC BY 4.0
Keywords: visual reinforcement learning, reinforcement learning, relative representation, zero-shot, stitching, latent communication
TL;DR: We enable compositionality between encoders and controllers of reinforcement learning policies trained end-to-end, without any modification to the existing RL training paradigm.
Abstract: Visual Reinforcement Learning is a popular and powerful framework that takes full advantage of the Deep Learning breakthrough. It is known that variations in the input domain (e.g., different colors of the panorama due to the season of the year) or in the task domain (e.g., changing the target speed of a car) can disrupt an agent's performance, therefore requiring new training. Recent advancements in latent communication theory show that it is possible to combine components of different neural networks to create new models in a zero-shot fashion. In this paper, we leverage such advancements to show that components of agents trained on different visual and task variations can be combined, by aligning the latent representations produced by their encoders, to obtain new agents that act well in visual-task combinations never seen together during training. Our findings open the door to more efficient training processes, significantly reducing time and computational costs.
Submission Number: 57
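
The stitching idea described in the abstract can be illustrated with a minimal sketch. It assumes a relative-representation style of alignment (latents re-expressed as cosine similarities to a shared set of anchor observations), which is one possible realization of latent communication; the class and function names (`RelativeEncoder`, `stitch_policy`) and the assumption that the controller consumes anchor-similarity inputs are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of zero-shot stitching via relative representations.
# Assumes both agents share the same anchor observations; all names are
# illustrative and not drawn from the paper's codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelativeEncoder(nn.Module):
    """Wraps a pretrained encoder and re-expresses its latents relative to anchors."""

    def __init__(self, encoder: nn.Module, anchor_obs: torch.Tensor):
        super().__init__()
        self.encoder = encoder
        # Precompute and freeze the anchor embeddings in this encoder's own latent space.
        with torch.no_grad():
            self.register_buffer("anchor_latents", encoder(anchor_obs))  # (A, D)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        z = self.encoder(obs)  # absolute latent, shape (B, D)
        # Relative representation: cosine similarity of each latent to every anchor, shape (B, A).
        return F.normalize(z, dim=-1) @ F.normalize(self.anchor_latents, dim=-1).T


def stitch_policy(encoder_a: nn.Module, controller_b: nn.Module,
                  anchor_obs: torch.Tensor) -> nn.Module:
    """Zero-shot stitching: encoder from agent A combined with controller from agent B.

    The shared relative space makes the two independently trained components
    compatible without any fine-tuning, provided the controller was trained
    on relative (anchor-similarity) inputs.
    """
    rel_encoder = RelativeEncoder(encoder_a, anchor_obs)
    return nn.Sequential(rel_encoder, controller_b)
```

Other alignment schemes from the latent communication literature (e.g., estimating an affine map between the two encoders' latent spaces from the anchors) would fit the same interface: the encoder of one agent is projected into a space the other agent's controller already understands.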