Evaluating Data Representations for Object Recognition During Pick-and-Place Manipulation Tasks

Published: 01 Jan 2022, Last Modified: 17 May 2023 · SysCon 2022
Abstract: When manipulating objects, robots need to build local and global descriptions of the environment simultaneously. Recognizing objects and estimating their pose are examples of tasks expected from robots operating in unstructured environments, and an efficient solution to these tasks has the potential to increase robotic usage in such settings. This paper presents a study on the representation of tactile and joint-position data for recognizing everyday objects. We performed 12 experiments extracting features in different ways from a publicly available dataset. More specifically, this work uses four data representations, namely 3 Points, 10 Points, Average, and Descriptive Statistics (DS), over two sensor types (positional and tactile sensors, used separately) and the combination of both. Using these data representations, we trained and evaluated machine learning models on the object recognition task. Our findings support that tactile data, and its combination with finger joint-position information, can be successfully used for object identification during manipulation tasks. The feature engineering approach used to represent the dataset showed promising results for recognizing objects from combined tactile and joint-position information. Our exploratory analysis of different data representations was crucial for improving object recognition, moving from a low accuracy of 30.31% (using data from the positional sensor only with sampled averages) to a high of 93.53% accuracy (using an Extra Trees classifier trained on data from all sensors with the DS representation).
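The sketch below illustrates the general idea of the Descriptive Statistics (DS) representation combined with an Extra Trees classifier, as described in the abstract. It is not the authors' code: the trial loader, array shapes, channel counts, and the exact set of statistics are assumptions for illustration, and synthetic data stands in for the public dataset.

```python
# Minimal sketch: DS-style features over tactile + joint-position time series,
# classified with scikit-learn's ExtraTreesClassifier.
# Dataset layout and shapes are hypothetical.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def descriptive_stats(trial: np.ndarray) -> np.ndarray:
    """Summarize a (timesteps, channels) time series with per-channel
    descriptive statistics, giving a fixed-length feature vector."""
    return np.concatenate([
        trial.mean(axis=0),
        trial.std(axis=0),
        trial.min(axis=0),
        trial.max(axis=0),
        np.median(trial, axis=0),
    ])


def build_features(trials):
    """Concatenate DS features from tactile and joint-position streams."""
    X, y = [], []
    for t in trials:
        tactile_feats = descriptive_stats(t["tactile"])  # tactile sensors
        joint_feats = descriptive_stats(t["joints"])     # finger joint positions
        X.append(np.concatenate([tactile_feats, joint_feats]))
        y.append(t["label"])
    return np.asarray(X), np.asarray(y)


if __name__ == "__main__":
    # Synthetic stand-in trials so the sketch runs end to end.
    rng = np.random.default_rng(0)
    trials = [
        {
            "tactile": rng.normal(size=(100, 16)),  # assumed 16 tactile channels
            "joints": rng.normal(size=(100, 12)),   # assumed 12 joint channels
            "label": int(rng.integers(0, 5)),       # assumed 5 object classes
        }
        for _ in range(200)
    ]
    X, y = build_features(trials)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.4f}")
```

On real grasp data, the same pipeline would simply swap the synthetic trials for the dataset's tactile and joint-position recordings; the reported 93.53% corresponds to the all-sensors, DS-representation configuration.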