Abstract: One of the major challenges in AI is teaching machines to perceive and use environmental functionalities precisely, thereby achieving the affordance awareness that humans possess. Despite its importance, progress in affordance learning has lagged, especially in 3D, because annotating affordance is laborious owing to the numerous variations of human-object interaction. The scarcity of affordance data limits generalization across object categories and also forces oversimplified affordance representations that capture only a fraction of the full affordance. To overcome these challenges, we propose a novel, self-supervised method that generates a dataset of 3D affordance examples given only a 3D object as input, without any manual annotation. The method first renders the 3D object into images and creates 2D affordance examples by inserting humans into the images via inpainting diffusion models, where an Adaptive Mask algorithm is introduced to enable human insertion without harming the original details of the object. The method then lifts the inserted humans back to 3D to create 3D human-object pairs, resolving the depth ambiguity within a virtual-triangulation framework that utilizes pre-generated human postures from multiple viewpoints. We also provide a novel affordance representation defined on relative orientations and proximity between dense human and object points, which can be easily aggregated from any 3D HOI dataset. The proposed representation serves as a primitive that can be manifested as conventional affordance representations via simple transformations, ranging from physically exerted affordances (e.g., contact) to non-physical ones (e.g., orientation tendency, spatial relations). We demonstrate the efficacy of our method and representation by generating a 3D affordance dataset and deriving high-quality affordance examples from the representation, including contact, orientation, and spatial occupancy.
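To make the proposed point-pair primitive concrete, the following is a minimal sketch (not the authors' implementation) of how proximity and relative orientation between dense human and object points could be stored, and how a conventional representation such as a per-point contact map might be derived from it by simple thresholding. The function names, the array layout, and the contact threshold are illustrative assumptions, not taken from the paper.

```python
# Sketch of a pair-wise affordance primitive: proximity and relative orientation
# between dense human and object surface points, plus a derived contact map.
# Assumes the human and object surfaces are given as (N, 3) / (M, 3) NumPy point clouds.
import numpy as np

def pairwise_primitive(human_pts: np.ndarray, object_pts: np.ndarray):
    """Return per-pair proximity (M, N) and relative orientation (M, N, 3).

    For every (object point, human point) pair we store the Euclidean distance
    and the unit direction vector pointing from the object point to the human point.
    """
    diff = human_pts[None, :, :] - object_pts[:, None, :]      # (M, N, 3)
    dist = np.linalg.norm(diff, axis=-1)                       # (M, N)
    direction = diff / np.clip(dist[..., None], 1e-8, None)    # unit direction vectors
    return dist, direction

def contact_map(dist: np.ndarray, threshold: float = 0.02):
    """Per-object-point contact score: 1 if any human point lies within `threshold` (meters)."""
    return (dist.min(axis=1) < threshold).astype(np.float32)

# Example usage with random stand-in point clouds for a single human-object pair.
human_pts = np.random.rand(1024, 3)
object_pts = np.random.rand(2048, 3)
dist, direction = pairwise_primitive(human_pts, object_pts)
contact = contact_map(dist)   # (2048,) contact scores over the object surface
```

Aggregating such per-pair distances and directions over many generated human-object samples would then yield distributional cues such as orientation tendency or spatial occupancy, consistent with the transformations described above.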