Minimally Invasive Morphology Adaptation via Parameter Efficient Fine-Tuning

Published: 23 Oct 2024, Last Modified: 04 Nov 2024, CoRL 2024 Workshop MAPoDeL, CC BY 4.0
Keywords: Morphology Agnostic; Reinforcement Learning; Parameter Efficient Fine Tuning
TL;DR: We compare multiple parameter efficient fine-tuning methods for morphology agnostic reinforcement learning.
Abstract: Learning reinforcement learning policies to control individual robots is often computationally uneconomical because minor variations in robot morphology (e.g. dynamics or number of limbs) can negatively impact policy performance. This limitation has motivated morphology-agnostic policy learning, in which a monolithic deep learning policy learns to generalize across robotic morphologies. Unfortunately, these policies still have sub-optimal zero-shot performance compared to end-to-end finetuning on target morphologies. This limitation has ramifications in practical robotic applications, as online finetuning of large neural networks can require immense computation. In this work, we investigate \textit{parameter-efficient finetuning} (PEFT) techniques for specializing morphology-agnostic policies to a target robot while minimizing the number of learnable parameters adapted during online learning. We compare direct finetuning approaches, which update subsets of the base model parameters, and input-learnable approaches, which add parameters that manipulate the inputs passed to the base model. Our analysis concludes that tuning relatively few parameters (0.01\% of the base model) can measurably improve policy performance over zero-shot inference. These results serve a prescriptive purpose for future research, indicating which scenarios each PEFT approach is best suited for when adapting policies to new robotic morphologies.
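To make the distinction between the two PEFT families concrete, the sketch below contrasts them in PyTorch. It is an illustrative example only, not the paper's implementation: the module names, sizes, and the choice of which subset to unfreeze are assumptions.

```python
# Minimal sketch (hypothetical architecture, not the paper's code) contrasting
# (a) direct finetuning of a parameter subset and (b) an input-learnable adapter
# in front of a frozen morphology-agnostic base policy.
import torch
import torch.nn as nn


class BasePolicy(nn.Module):
    """Stand-in for a pretrained morphology-agnostic policy."""

    def __init__(self, obs_dim=64, act_dim=8, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        return self.head(self.backbone(obs))


base = BasePolicy()

# (a) Direct finetuning: freeze the whole base model, then unfreeze only a
# small subset of its parameters (here, just the action head) for online RL.
for p in base.parameters():
    p.requires_grad_(False)
for p in base.head.parameters():
    p.requires_grad_(True)
direct_params = [p for p in base.parameters() if p.requires_grad]


# (b) Input-learnable adaptation: keep the base model fully frozen and learn a
# small module that transforms observations before they reach the base policy.
class InputAdapter(nn.Module):
    def __init__(self, obs_dim=64):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(obs_dim))
        self.shift = nn.Parameter(torch.zeros(obs_dim))

    def forward(self, obs):
        return obs * self.scale + self.shift


adapter = InputAdapter()
input_learnable_params = list(adapter.parameters())

# Only the chosen small parameter set is handed to the RL optimizer; the base
# model's weights stay fixed during online adaptation.
total = sum(p.numel() for p in base.parameters())
print("direct finetuning:", sum(p.numel() for p in direct_params), "of", total)
print("input-learnable:", sum(p.numel() for p in input_learnable_params), "of", total)
optimizer = torch.optim.Adam(input_learnable_params, lr=3e-4)
```

In either case, the optimizer would be plugged into whatever online RL loop is used for the target morphology; the point of the sketch is only that the trainable parameter count is a small fraction of the base model.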
Submission Number: 5