RL in context - towards a framing that enables cybernetics-style questions

Published: 04 Jun 2024 · Last Modified: 19 Jul 2024 · Finding the Frame: RLC 2024 Poster · CC BY 4.0
Keywords: cybernetics, continual learning, reinforcement learning
TL;DR: To enable tackling cybernetics-style questions in a learning-theoretic manner, we sketch a continual learning framing of RL which consists of simplified RL agents at various scales, such that more complex agents are composed of simpler ones.
Abstract: Early work in cybernetics and artificial intelligence aimed as much at developing computational models that enable mathematical understanding of key principles in living organisms as at developing computing machines based on such models. Reinforcement learning traces its roots back to these early days as well, with significant development since then, both in mathematical formalization and in applied success. As we gather in this workshop to examine the conceptual foundations of RL and how these shape problem framing, this short paper aims especially in the direction of building computational models that capture and shed light on aspects of biological life. Given the generality of RL, we argue it is an especially fruitful paradigm to bring into a novel formulation of continual learning - one that may not immediately fit the bitter lesson's emphasis on scaling above all, but that we hope can be used to formulate cybernetics-style questions about the rich hierarchy of learning processes found in complex living systems. Strengthening this foundational thrust also holds promise for making sense of safety issues in the ongoing deployment of AI.
Submission Number: 33
