Autonomous Perception and Onboard Intelligence for Space Missions: A Survey
Keywords: Autonomous Space Perception, Spacecraft Pose Estimation, Domain Adaptation, Onboard Edge Computing
Abstract: The integration of deep learning into space missions is revolutionizing autonomous operations, from planetary exploration to on-orbit servicing and Earth observation. However, deploying computer vision systems in space presents unique challenges, including stringent Size, Weight, Power, and Cost (SWaP-C) constraints, extreme illumination variations, and a profound "sim-to-real" domain gap caused by the scarcity of in-orbit data. This paper provides a comprehensive review of the rapidly evolving landscape of autonomous space perception, structured around five core pillars: spacecraft pose estimation, multi-modal sensing, onboard edge computing, vision-based navigation, and mission robustness. We analyze recent advances in lightweight architectures, event-based perception, and hardware-aware optimizations that enable real-time inference on radiation-tolerant edge accelerators. Furthermore, we examine the critical role of high-fidelity synthetic data generation and domain adaptation techniques in bridging the reality gap. Finally, we discuss open challenges such as adversarial robustness and uncertainty quantification, and propose a phased research roadmap toward collaborative and certifiable in-orbit AI ecosystems.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 57