Learning a Structured Neural Network Policy for a Hopping Task

Published: 25 Jun 2020, Last Modified: 05 May 2023 · RobRetro 2020
Abstract: We published Learning a Structured Neural Network Policy for a Hopping Task [1] roughly two years ago as a RAL journal paper, presented at IROS 2018. The paper is about learning a hopping motion on a single-legged robot. It contributes a method to learn the dynamics of the system, a way to optimize a hopping policy, and two different ways to transfer the optimized policy to a neural network policy. One of the neural network policies, the feedback network policy, was designed to learn the feedback and feedforward gains, which makes it possible to inspect the behavior of the policy by analyzing its outputs. In the following, I outline a few lessons learned that are not mentioned in the original paper. In addition, I list a few things I would do differently from today's standpoint.
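To make the structure of such a feedback network policy concrete, here is a minimal NumPy sketch, not the authors' implementation: the network outputs a feedforward command u_ff and a gain matrix K, and the control is u = u_ff + K (x_des - x). The state/action dimensions, the linear stand-in for the trained network, and all names are hypothetical illustrations; the point is only that u_ff and K are explicit outputs that can be inspected directly.

```python
import numpy as np

STATE_DIM = 4   # assumed: joint positions and velocities of the leg
ACT_DIM = 2     # assumed: joint torques

rng = np.random.default_rng(0)
# Random weights as a stand-in for the trained network parameters.
W_ff = rng.normal(scale=0.1, size=(ACT_DIM, STATE_DIM))
W_K = rng.normal(scale=0.1, size=(ACT_DIM * STATE_DIM, STATE_DIM))

def feedback_network_policy(x, x_des):
    """Structured policy: predict a feedforward term u_ff and a
    state-dependent gain matrix K, then apply u = u_ff + K (x_des - x).
    Both u_ff and K can be logged and analyzed separately."""
    u_ff = W_ff @ x                               # feedforward command
    K = (W_K @ x).reshape(ACT_DIM, STATE_DIM)     # feedback gains
    return u_ff + K @ (x_des - x)

x = rng.normal(size=STATE_DIM)   # current state
x_des = np.zeros(STATE_DIM)      # desired state along the hopping motion
print(feedback_network_policy(x, x_des))
```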
TL;DR: Retrospective on Learning a Structured Neural Network Policy for a Hopping Task
Keywords: Retrospective, Lessons learned