Keywords: Reinforcement Learning, Language Models, History Compression, Partial Observability, Foundation Models, Interpretability, Explainable AI
TL;DR: We introduce SHELM, a semantic and human-readable memory for reinforcement learning, and showcase its strengths on several memory-dependent environments.
Abstract: Reinforcement learning agents deployed in the real world often have to cope with partially observable environments.
Therefore, most agents employ memory mechanisms to approximate the state of the environment.
Recently, there have been impressive success stories in mastering partially observable environments, mostly in the realm of computer games like Dota 2, StarCraft II, or Minecraft.
However, existing methods lack interpretability: it is not comprehensible to humans what the agent stores in its memory.
To address this, we propose a novel memory mechanism that represents past events in human language.
Our method uses CLIP to associate visual inputs with language tokens.
Then, we feed these tokens to a pretrained language model that serves as the agent's memory, providing it with a coherent and human-readable representation of the past.
We train our memory mechanism on a set of partially observable environments and find that it excels on tasks that require a memory component, while mostly attaining performance on par with strong baselines on tasks that do not.
On a challenging continuous recognition task, where memorizing the past is crucial, our memory mechanism converges two orders of magnitude faster than prior methods.
Since our memory mechanism is human-readable, we can peek at an agent's memory and check whether crucial pieces of information have been stored.
This significantly enhances troubleshooting and paves the way toward more interpretable agents.
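To make the described mechanism concrete, here is a minimal sketch of the token-retrieval step, assuming the Hugging Face transformers library, a generic public CLIP checkpoint, a small hand-picked candidate vocabulary, and GPT-2 as the pretrained language model; these names and parameters are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, GPT2Model, GPT2Tokenizer

# Illustrative candidate vocabulary; the paper's actual token set may differ.
vocab = ["key", "door", "ball", "box", "wall"]

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder observation; in practice this is the agent's visual input.
observation = Image.new("RGB", (224, 224))

# Score every candidate token against the current observation with CLIP.
inputs = processor(text=vocab, images=observation,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    scores = clip(**inputs).logits_per_image  # shape: (1, len(vocab))

# Keep the top-k tokens as a human-readable description of the observation.
top = scores.topk(k=2, dim=-1)
tokens = [vocab[i.item()] for i in top.indices[0]]

# Append the tokens to the language history and compress it with the LM;
# the LM's hidden states then serve as the agent's memory representation.
lm_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2Model.from_pretrained("gpt2")
history = " ".join(tokens)
ids = lm_tokenizer(history, return_tensors="pt")
with torch.no_grad():
    memory = lm(**ids).last_hidden_state  # fed to the agent's policy

print(history)  # the memory stays inspectable as plain text
```

Because the history is plain text, one can print it at any timestep to check which past events the agent has retained, which is the inspection capability the abstract describes.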
Submission Number: 12334