Category-Level 6D Object Pose Estimation in Agricultural Settings Using a Lattice-Deformation Framework and Diffusion-Augmented Synthetic Data

Published: 23 Jun 2025, Last Modified: 23 Jun 2025 · Greeks in AI 2025 Poster · CC BY 4.0
Keywords: 6D Object Pose Estimation, Category-Level Pose Estimation, Agricultural Robotics, Lattice-Based Deformation, Synthetic Data Augmentation, RGB-based Pose Estimation, Deep Learning in Agriculture, Mesh Deformation, Diffusion Models, Computer Vision in Agriculture, Object Shape Variability, Vision and Learning
TL;DR: PLANTPose estimates 6D pose and shape deformations of fruits from RGB images using a single CAD model and diffusion-enhanced synthetic data.
Abstract: Accurate 6D object pose estimation is essential for robotic grasping and manipulation, particularly in agriculture, where fruits and vegetables exhibit high intra-class variability in shape, size, and texture. The vast majority of existing methods rely on instance-specific CAD models or require depth sensors to resolve geometric ambiguities, making them impractical for real-world agricultural applications. In this work, we introduce PLANTPose, a novel framework for category-level 6D pose estimation that operates purely on RGB input. PLANTPose predicts both the 6D pose and deformation parameters relative to a base mesh, allowing a single category-level CAD model to adapt to unseen instances. This enables accurate pose estimation across varying shapes without relying on instance-specific data. To enhance realism and improve generalization, we also leverage Stable Diffusion to refine synthetic training images with realistic texturing, mimicking variations due to ripeness and environmental factors and bridging the domain gap between synthetic data and the real world. Our evaluations on a challenging benchmark that includes bananas of various shapes, sizes, and ripeness states demonstrate the effectiveness of our framework in handling large intra-class variations while maintaining accurate 6D pose predictions, significantly outperforming the state-of-the-art RGB-based approach MegaPose. A preprint of this work is available on arXiv: https://arxiv.org/abs/2505.24636
Keywords: Robotics and Embodied AI, Vision and Learning
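The core idea of deforming a single category-level base mesh via predicted lattice parameters can be illustrated with classic free-form deformation (FFD). The sketch below is a minimal, generic FFD implementation and is not the paper's exact parameterization: the lattice shape, the use of Bernstein basis weights, and the function names (`ffd_deform`, `apply_pose`) are illustrative assumptions.

```python
import numpy as np
from math import comb

def ffd_deform(vertices, control_offsets, lattice_shape=(3, 3, 3)):
    """Deform mesh vertices with a Bernstein-basis free-form deformation lattice.

    vertices: (N, 3) array of points normalized to the unit cube [0, 1]^3.
    control_offsets: (l, m, n, 3) array of displacements for each lattice
        control point (these play the role of predicted deformation parameters).
    """
    l, m, n = lattice_shape

    def bernstein(k, d, t):
        # k-th Bernstein polynomial of degree d evaluated at t in [0, 1]
        return comb(d, k) * (t ** k) * ((1 - t) ** (d - k))

    deformed = vertices.copy()
    for i in range(l):
        for j in range(m):
            for k in range(n):
                # Trivariate Bernstein weight of control point (i, j, k)
                # for every vertex; weights over all control points sum to 1,
                # so adding weighted offsets deforms around the base shape.
                w = (bernstein(i, l - 1, vertices[:, 0])
                     * bernstein(j, m - 1, vertices[:, 1])
                     * bernstein(k, n - 1, vertices[:, 2]))
                deformed += w[:, None] * control_offsets[i, j, k]
    return deformed

def apply_pose(vertices, R, t):
    """Apply a rigid 6D pose (rotation R, translation t) to mesh vertices."""
    return vertices @ R.T + t
```

With all control offsets at zero the deformation is the identity, so the base mesh is recovered exactly; a pose-and-shape prediction would supply nonzero offsets plus `(R, t)` to fit an unseen instance.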
Submission Number: 125