SPACE-HOP: Spacecraft 6-DoF Pose Estimation using Embedding-Predictive Pretraining and Hopf Map
Keywords: Joint Embedding Predictive Architecture, 6-DoF Pose Estimation, Spacecraft Pose Estimation, Hopf fibration, Unsupervised Learning
TL;DR: We implement JEPA pre-training for 6-DoF spacecraft pose estimation, highlighting the limitations of single-level JEPA. We achieve sub-degree accuracy via a classification-and-refinement head that combines a discrete Hopf grid with continuous Lie-algebra offsets.
Abstract: Vision-based methods capable of determining the six-degree-of-freedom (6-DoF) pose of a spacecraft are crucial for rendezvous, proximity operations, and docking. These operations involve a series of orbital maneuvers required to achieve orbital synchronization and precise orientation alignment.
Despite significant progress, existing methods exhibit two major limitations. First, reconstruction-driven objectives often prioritize pixel-level fidelity, neglecting the physically meaningful geometric structure embedded in semantic representations. Second, deterministic pose regression methods are inherently sensitive to noise and prone to convergence instability. Although continuous probabilistic models such as the Matrix Fisher distribution capture spatial uncertainty, their computational cost makes them impractical.
To address these challenges, we propose SPACE-HOP, which leverages a masking-based, self-supervised Joint Embedding Predictive Architecture (JEPA) for spacecraft pose estimation. The method pre-trains an encoder-predictor Vision Transformer to learn geometry-aware representations. We then present a novel formulation of rotation estimation as a classification task over a discretization of the SO(3) manifold, followed by a refinement step that reduces quantization error to approximate a continuous output space. The core mechanism classifies the orientation over a uniform Hopf fibration grid of SO(3) anchors and regresses accurate continuous offsets relative to the selected anchor. We demonstrate that, despite being a foundational study, our method performs comparably to state-of-the-art regression models, even without extensive external pretraining or Test-Time Adaptation.
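To illustrate the classification-and-refinement idea, the sketch below (hypothetical code, not the authors' implementation) composes the most likely anchor rotation from a precomputed SO(3) grid with a predicted Lie-algebra offset. The names `logits`, `offsets`, and `anchors` are assumptions, and the Hopf fibration grid itself is treated as a precomputed input rather than constructed here.

```python
# Minimal sketch (assumed interface, not the authors' code): decode a rotation
# by choosing a discrete SO(3) anchor and refining it with a continuous
# so(3) offset applied via the exponential map.
import torch


def so3_exp(omega: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: map axis-angle vectors (B, 3) to rotation matrices (B, 3, 3)."""
    theta = omega.norm(dim=-1, keepdim=True).clamp(min=1e-8)   # (B, 1) rotation angles
    axis = omega / theta                                       # (B, 3) unit rotation axes
    # Skew-symmetric cross-product matrices K for each axis.
    K = torch.zeros(omega.shape[0], 3, 3, device=omega.device, dtype=omega.dtype)
    K[:, 0, 1], K[:, 0, 2] = -axis[:, 2], axis[:, 1]
    K[:, 1, 0], K[:, 1, 2] = axis[:, 2], -axis[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -axis[:, 1], axis[:, 0]
    theta = theta.unsqueeze(-1)                                # (B, 1, 1) for broadcasting
    I = torch.eye(3, device=omega.device, dtype=omega.dtype).expand_as(K)
    return I + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)


def decode_rotation(logits: torch.Tensor, offsets: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
    """Pick the most likely anchor rotation, then refine it with a small offset.

    logits:  (B, N) classification scores over the N anchor rotations
    offsets: (B, 3) predicted so(3) offsets in the anchor's local frame
    anchors: (N, 3, 3) precomputed SO(3) grid (e.g. from a Hopf fibration)
    """
    idx = logits.argmax(dim=-1)          # (B,) index of the winning anchor
    R_anchor = anchors[idx]              # (B, 3, 3) coarse rotation estimate
    return R_anchor @ so3_exp(offsets)   # (B, 3, 3) refined continuous rotation
```

Under this reading, the classification head bounds the worst-case error by the grid resolution, while the offset head recovers sub-cell accuracy; whether the offset is applied on the left or the right of the anchor is a design choice not fixed by the abstract.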
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 36