- Keywords: subjective timescales, model-based reinforcement learning, episodic memory
- TL;DR: Using episodic memories to learn subjective-timescale models in RL agents
- Abstract: Planning in complex environments requires reasoning over multi-step timescales. However, in model-based learning, an agent's model is typically defined over transitions between consecutive states. This leads to plans that pass through intermediate states which are either unnecessary or, worse, introduce cumulative prediction errors. Inspired by recent work on human time perception, we devise a novel approach for learning a transition dynamics model based on the sequences of episodic memories that define an agent's subjective timescale – over which it learns world dynamics and over which future planning is performed. We analyse the emergent benefits of the subjective-timescale model (STM) by incorporating it into two disparate model-based methods – Dreamer and deep active inference. Using 3D visual foraging tasks, we demonstrate that STM can systematically vary the temporal extent of its predictions and is more likely to predict future salient events (such as new objects coming into view). Compared to agents trained over objective timescales, STM agents also collect more rewards, owing to their ability to plan flexibly and to their more pronounced exploratory behaviour.
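The core idea can be illustrated with a minimal sketch. The abstract does not specify how episodic memories are formed, so this example assumes a simple surprise-based criterion (a hypothetical `select_episodic_memories` helper that keeps states whose prediction error exceeds a threshold); the transition model is then trained on pairs of consecutive *memories* rather than consecutive raw states, giving it a subjective rather than objective timescale:

```python
def select_episodic_memories(states, pred_errors, threshold=1.0):
    """Keep states whose prediction error exceeds a threshold.

    This is a hypothetical stand-in for episodic-memory formation:
    surprising (salient) states are committed to memory.
    """
    keep = [0]  # always retain the initial state
    for t in range(1, len(states)):
        if pred_errors[t] > threshold:
            keep.append(t)
    return [states[i] for i in keep], keep


def memory_transition_pairs(memories):
    """Training pairs for a subjective-timescale transition model:
    consecutive episodic memories, which may be many environment
    steps apart, instead of consecutive raw states."""
    return list(zip(memories[:-1], memories[1:]))


# Toy trajectory: 6 states with per-step prediction errors.
states = [0, 1, 2, 3, 4, 5]
pred_errors = [0.0, 0.2, 1.5, 0.1, 2.0, 0.3]

memories, indices = select_episodic_memories(states, pred_errors)
pairs = memory_transition_pairs(memories)
# Only the surprising states (indices 0, 2, 4) become memories,
# so the model learns transitions that skip unremarkable steps.
```

An objective-timescale model would instead train on every adjacent pair `(0,1), (1,2), …`; the subjective-timescale pairs here jump directly between salient states, which is what lets the model vary the temporal extent of its predictions.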