Automated assembly skill acquisition and implementation through human demonstration

2018 (modified: 05 Jun 2020) · Robotics and Autonomous Systems, 2018
Highlights

• This paper develops an overall framework for robot skill learning through human demonstration.
• A Portable Assembly Demonstration (PAD) system is developed as the learning platform.
• Both human motion and object information are considered for action recognition.
• Assembly states are estimated based on 3D part models created by a 3D scanner.
• The overall framework is evaluated on a Baxter robot.

Abstract: Acquiring robot assembly skills through human demonstration is an important research problem and can be used to quickly program robots in future manufacturing industries. To teach robots complex assembly skills, the robots must be able to recognize the objects (parts and tools) involved, the actions applied, and the effect of those actions on the parts. Recognizing subtle assembly actions is non-trivial, and estimating the effect of the actions on an assembly part is also challenging because of the small part sizes. In this paper, using an RGB-D camera, we build a Portable Assembly Demonstration (PAD) system that automatically recognizes the objects (parts/tools) involved, the actions conducted, and the assembly states characterizing the spatial relationships among the parts. The experimental results show that the PAD system generates a high-level assembly script with good accuracy in object and action recognition as well as assembly state estimation. The assembly script is successfully implemented on a Baxter robot.