Towards Standards and Guidelines for Developing Open-Source and Benchmarking Learning for Robot Manipulation in the COMPARE Ecosystem
Keywords: open-source, benchmarking, standards
TL;DR: This paper provides a brief overview of the COMPARE ecosystem and initial considerations for its efforts to improve open-source development and benchmarking of learning in robot manipulation.
Abstract: The Collaborative Open-source Manipulation Performance Assessment for Robotics Enhancement (COMPARE) ecosystem will enable the robot manipulation community to more effectively conduct research and evaluate system performance, with the goal of enabling the quantitative comparison of approaches. COMPARE will address pertinent issues in robot manipulation (e.g., modularity of software, quality control, and testing frameworks), conduct outreach to build participation, and activate the ecosystem through activities such as workshops and competitions. Given the diversity of open-source products available for robot manipulation -- including perception and planning packages, learning algorithms, datasets, benchmarking protocols, object sets, and hardware designs -- the goal of the ecosystem is to create greater cohesion and compatibility between these products via community-driven standards that allow increased modularity and easier implementation, enabling enhanced performance quantification. This paper provides a brief overview of COMPARE and initial considerations for its efforts to improve robot learning.
Submission Number: 48