Keywords: Transparent objects, Shape recognition, Object manipulation
TL;DR: This paper proposes a novel framework for recognizing and manipulating partially observed transparent tableware objects.
Abstract: Recognizing and manipulating transparent tableware from partial-view RGB images is challenging because reliable depth measurements of transparent objects are difficult to obtain. In this paper we present the Transparent Tableware SuperQuadric Network (T$^2$SQNet), a neural network model that leverages a family of newly extended deformable superquadrics to produce low-dimensional, instance-wise, and accurate 3D geometric representations of transparent objects from partial views. As a byproduct and contribution of independent interest, we also present TablewareNet, a publicly available toolset of seven parametrized shapes based on our extended deformable superquadrics, which can be used to generate new datasets of tableware objects of diverse shapes and sizes. Experiments with T$^2$SQNet trained on TablewareNet show that it outperforms existing methods in recognizing transparent objects, in some cases by significant margins, and can be used effectively in robotic applications such as decluttering and target retrieval.
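For readers unfamiliar with superquadrics, the sketch below evaluates the standard superquadric inside-outside function, which is the background representation that deformable superquadrics build on. It is illustrative only: the paper's extended deformable parametrization and the seven TablewareNet shapes are not reproduced here, and the function name and argument layout are assumptions, not the released T$^2$SQNet API.

```python
import numpy as np

def superquadric_inside_outside(points, scale, eps):
    """Standard superquadric inside-outside function F(x, y, z).

    F < 1: the point lies inside the surface, F = 1: on it, F > 1: outside.
    `scale` = (a1, a2, a3) axis lengths, `eps` = (e1, e2) shape exponents.
    (Illustrative background only; not the paper's extended deformable model.)
    """
    a1, a2, a3 = scale
    e1, e2 = eps
    x, y, z = np.abs(points).T  # absolute values keep fractional powers real
    xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
    return xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

# Example: a1 = a2 = a3 = 1 and e1 = e2 = 1 recover the unit sphere.
pts = np.array([[0.5, 0.0, 0.0], [1.0, 1.0, 1.0]])
print(superquadric_inside_outside(pts, scale=(1, 1, 1), eps=(1.0, 1.0)))
# -> [0.25 3.  ]  (first point inside, second outside)
```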
Supplementary Material: zip
Video: https://www.youtube.com/watch?v=6m5ZOrbSxxI
Website: https://t2sqnet.github.io/
Code: https://github.com/seungyeon-k/T2SQNet-public
Publication Agreement: pdf
Student Paper: yes
Spotlight Video: mp4
Submission Number: 532