Towards Natural Language-Driven Industrial Assembly Using Foundation Models

Published: 11 Mar 2024, Last Modified: 22 Apr 2024 · LLMAgents @ ICLR 2024 Poster · CC BY 4.0
Keywords: Robotics Foundation Models, Large Language Models, Machine Learning, Industrial Robotics, Robotic Assembly
TL;DR: We present a global control policy based on LLMs that hands off control, via dynamic context switching, to a finite library of skills specifically trained to perform high-precision tasks.
Abstract: Large Language Models (LLMs) and strong vision models have enabled rapid research and development of Vision-Language-Action models for robotic control. The main objective of these methods is a generalist policy that can control robots with various embodiments. However, in industrial applications such as automated assembly and disassembly, some tasks, such as insertion, demand greater accuracy and involve intricate factors like contact engagement, friction handling, and refined motor skills. Implementing these skills with a generalist policy is challenging because such skills may need to integrate additional sensory data, including force or torque measurements, to achieve the required precision. In our method, we present a global control policy based on LLMs that hands off control, via dynamic context switching, to a finite set of skills specifically trained to perform high-precision tasks. The integration of LLMs into this framework underscores their significance not only in interpreting and processing language inputs but also in enriching the control mechanisms for diverse and intricate robotic operations.
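For illustration only, below is a minimal Python sketch of the dispatching idea described in the abstract: a global LLM-based policy maps a natural-language instruction to skills drawn from a finite library, and execution switches to each specialized skill together with the sensing context it needs. The skill names, the `plan_with_llm` stub, and the force/torque flag are assumptions for the sketch, not the paper's implementation.

```python
"""Illustrative sketch (not from the paper): an LLM-based global policy
dispatching to a finite library of specialized, high-precision skills.
All names, signatures, and the stubbed planner are hypothetical."""
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class SkillSpec:
    run: Callable[[str], None]          # the specialized low-level controller
    needs_force_torque: bool = False    # extra sensing the skill switches in


def pick(goal: str) -> None:
    print(f"[pick] vision-guided grasp for: {goal}")


def insert(goal: str) -> None:
    print(f"[insert] contact-rich, force/torque-guided insertion for: {goal}")


# Finite skill library: each entry stands in for a separately trained policy.
SKILL_LIBRARY: Dict[str, SkillSpec] = {
    "pick": SkillSpec(run=pick),
    "insert": SkillSpec(run=insert, needs_force_torque=True),
}


def plan_with_llm(instruction: str) -> List[str]:
    """Placeholder for the global LLM policy: it would map a natural-language
    instruction to a sequence of skill names drawn only from the finite
    library (stubbed here with a fixed plan)."""
    return ["pick", "insert"]


def execute(instruction: str) -> None:
    for name in plan_with_llm(instruction):
        spec = SKILL_LIBRARY[name]
        # Dynamic context switch: enable the sensing/controller the skill needs.
        if spec.needs_force_torque:
            print("  -> switching control context to include force/torque feedback")
        spec.run(instruction)


if __name__ == "__main__":
    execute("insert the peg into the housing")
```

The point of the sketch is the separation of concerns: the LLM only selects from a fixed, validated set of skills, while each skill owns its own high-precision control loop and sensor context.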
Submission Number: 87