Keywords: efficient fine-tuning, domain adaptation, robotic foundation model, LoRA, hypernetworks
Abstract: This paper investigates how to efficiently adapt a pre-trained robotic foundation model to a new domain containing many different tasks.
We introduce Hyper-LoRA, a method built on LoRA and hypernetworks (HNs), which makes this domain adaptation both parameter-efficient, through low-rank adaptation, and data-efficient, by using the HN to share knowledge across tasks in the target domain.
By training Hyper-LoRA on a moderate number of multi-task demonstrations from the target domain, we achieve not only significantly better performance on the training tasks, but also promising zero-shot generalization to unseen tasks.
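The following is a minimal sketch of the core idea, assuming a PyTorch implementation: a small hypernetwork, shared across tasks, maps a task embedding to the LoRA factors of a frozen layer of the foundation model. All names (HyperLoRALinear, task_dim, the hidden width, the rank) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class HyperLoRALinear(nn.Module):
    """Frozen base linear layer whose LoRA update (B @ A) is generated
    by a hypernetwork conditioned on a task embedding."""

    def __init__(self, base: nn.Linear, task_dim: int, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # keep pre-trained weights frozen
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        self.rank, self.d_in, self.d_out = rank, d_in, d_out
        # Hypernetwork: task embedding -> flattened LoRA factors A and B.
        # Because this network is shared, gradients from every task update
        # the same parameters, which is the knowledge-sharing mechanism.
        self.hyper = nn.Sequential(
            nn.Linear(task_dim, 128),
            nn.ReLU(),
            nn.Linear(128, rank * (d_in + d_out)),
        )

    def forward(self, x: torch.Tensor, task_emb: torch.Tensor) -> torch.Tensor:
        params = self.hyper(task_emb)  # shape: (rank * (d_in + d_out),)
        A = params[: self.rank * self.d_in].view(self.rank, self.d_in)
        B = params[self.rank * self.d_in :].view(self.d_out, self.rank)
        # Standard LoRA update: y = W x + B A x (scaling factor omitted).
        return self.base(x) + x @ A.t() @ B.t()


# Usage: adapt one frozen layer, conditioned on a per-task embedding.
layer = HyperLoRALinear(nn.Linear(512, 512), task_dim=32)
x = torch.randn(4, 512)
task_emb = torch.randn(32)  # e.g. a learned embedding for one target task
y = layer(x, task_emb)      # -> shape (4, 512)
```

At inference time, zero-shot generalization to an unseen task would amount to feeding a new task embedding through the trained hypernetwork to obtain that task's LoRA weights without any further gradient updates.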
Submission Number: 80