SingRef6D: Monocular Novel Object Pose Estimation with a Single RGB Reference

Anonymous Submission
Pipeline

Visualization of the inference pipeline (top) and details of our depth model, matching process, and highlights (bottom). During inference, our fine-tuned depth model first estimates accurate metric depth, even on challenging surfaces. The proposed depth-aware matching then uses the depth values as spatial cues to establish correspondences, even in low-textured regions. Next, the relative pose \(\mathbf{T}_{q\rightarrow r}\) is solved with a point cloud registration model. Finally, the 6D pose of the query object is computed via \(\mathbf{T}_{q}^{-1} = \mathbf{T}_{r}^{-1} \mathbf{T}_{q\rightarrow r}\).
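For concreteness, the final pose-composition step can be sketched as follows. This is a minimal illustration of the formula above, assuming all poses are 4x4 homogeneous transforms; the function and variable names (`recover_query_pose`, `T_r`, `T_q_to_r`) are ours and not part of the original pipeline.

```python
import numpy as np

def recover_query_pose(T_r: np.ndarray, T_q_to_r: np.ndarray) -> np.ndarray:
    """Recover the 6D pose of the query object.

    Given the known reference pose T_r and the estimated relative
    transform T_{q->r}, the caption's relation
        T_q^{-1} = T_r^{-1} @ T_{q->r}
    gives the query pose as T_q = (T_r^{-1} @ T_{q->r})^{-1}.
    All transforms are 4x4 homogeneous matrices.
    """
    T_q_inv = np.linalg.inv(T_r) @ T_q_to_r
    return np.linalg.inv(T_q_inv)
```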

Abstract

Recent 6D pose estimation methods achieve notable performance but still face practical limitations. For instance, many rely heavily on sensor depth, which can fail on challenging surfaces such as transparent or highly reflective materials. Meanwhile, RGB-based solutions deliver less robust matching in low-light and texture-less scenes because they lack geometric information. Motivated by these limitations, we propose SingRef6D, a lightweight pipeline that requires only a single RGB image as a reference, eliminating the need for costly depth sensors, multi-view image acquisition, or the training of view-synthesis models and neural fields. This allows SingRef6D to remain robust and capable even in resource-limited settings where depth or dense templates are unavailable. Our framework incorporates two key innovations. First, we propose a token-scaler-based fine-tuning mechanism with a novel optimization loss on top of Depth-Anything v2, enhancing its ability to predict accurate depth even on challenging surfaces. Our results show a 14.41% improvement (in \(\delta_{1.05}\)) in depth prediction on REAL275 compared to Depth-Anything v2 with a fine-tuned head. Second, benefiting from the predicted depth, we introduce a depth-aware matching process that integrates spatial relationships into LoFTR, enabling our system to handle matching under challenging materials and lighting conditions. Evaluations of pose estimation on the REAL275, ClearPose, and Toyota-Light datasets show that our approach surpasses state-of-the-art methods, achieving a 6.1% improvement in average recall.
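As a side note, \(\delta_{1.05}\) above refers to the standard threshold-accuracy metric for depth prediction. Below is a minimal sketch of how such a metric is typically computed, assuming per-pixel predicted and ground-truth depth arrays; the function name is ours, and we assume the paper follows the conventional definition.

```python
import numpy as np

def delta_accuracy(pred: np.ndarray, gt: np.ndarray, thresh: float = 1.05) -> float:
    """Fraction of valid pixels whose depth ratio max(pred/gt, gt/pred)
    falls below the threshold (here 1.05), the usual delta metric."""
    mask = gt > 0  # ignore pixels without ground-truth depth
    ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
    return float(np.mean(ratio < thresh))
```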

Demo Video of Depth Prediction

Demo Video of Pose Estimation