PHYSICAL DERIVATIVES: COMPUTING POLICY GRADIENTS BY PHYSICAL FORWARD-PROPAGATION

Anonymous

09 Feb 2023 (modified: 03 Mar 2023) · Submitted to Physics4ML · Readers: Everyone
Keywords: Task-agnostic Reinforcement Learning, Curiosity-driven RL, Control Theory
TL;DR: This work studies the sensitivity of a system's trajectories with respect to the policy parameters, the so-called Physical Derivatives.
Abstract: Model-free and model-based reinforcement learning are two ends of a spectrum. Learning a good policy without a dynamics model can be prohibitively expensive. Learning the dynamics model of a system can reduce the cost of learning the policy, but it can also introduce bias if the model is inaccurate. We propose a middle ground: instead of the transition model, we learn the sensitivity of the trajectories with respect to perturbations of the policy parameters. This allows us to predict the local behavior of the physical system around a set of nominal policies without knowing the actual model. We evaluate our method on a custom-built physical robot in extensive experiments and show the feasibility of the approach in practice. We investigate potential challenges when applying our method to physical systems and propose solutions to each of them.
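The core idea in the abstract, estimating how a trajectory changes under small perturbations of the policy parameters without access to the transition model, can be sketched with finite differences: perturb each parameter, re-run the system, and difference the resulting trajectories. The sketch below is illustrative only; it stands in a toy linear system for the physical robot used in the paper, and all names (`rollout`, `physical_derivative`, the dynamics matrices) are assumptions, not the authors' implementation.

```python
import numpy as np

def rollout(theta, x0, T=20, dt=0.1):
    """Roll out a toy linear system x' = A x + B u under the linear
    state-feedback policy u = theta @ x. This is an illustrative stand-in:
    the paper queries a real physical system instead of a known model."""
    A = np.array([[0.0, 1.0], [0.0, -0.1]])
    B = np.array([[0.0], [1.0]])
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(T):
        u = theta @ x                      # policy output for current state
        x = x + dt * (A @ x + B @ u)       # Euler step of the dynamics
        traj.append(x.copy())
    return np.concatenate(traj)            # flattened trajectory vector

def physical_derivative(theta, x0, eps=1e-4):
    """Finite-difference estimate of d(trajectory)/d(theta): perturb one
    policy parameter at a time, re-run the system, and difference the
    trajectories against the nominal rollout."""
    theta = np.asarray(theta, dtype=float)
    base = rollout(theta, x0)
    jac = np.zeros((base.size, theta.size))
    flat = theta.ravel()
    for i in range(flat.size):
        pert = flat.copy()
        pert[i] += eps
        jac[:, i] = (rollout(pert.reshape(theta.shape), x0) - base) / eps
    return jac
```

The resulting Jacobian lets one predict the trajectory of a nearby policy to first order, `rollout(theta + d) ≈ rollout(theta) + jac @ d.ravel()`, which is exactly the local-behavior prediction the abstract describes, without ever writing down the transition model.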