Keywords: space situational awareness, object detection and tracking, space image dataset, high-resolution images
TL;DR: SpaceSet, a large-scale realistic space-based image dataset for space situational awareness, together with a benchmark of SOTA object detection and tracking algorithms.
Abstract: Space situational awareness (SSA) plays an imperative role in maintaining safe space operations, especially given the increasingly congested space traffic around Earth. Space-based SSA offers a flexible and lightweight solution compared to traditional ground-based SSA. With advanced machine learning approaches, space-based SSA can extract features from high-resolution images captured in space to detect and track resident space objects (RSOs). However, existing spacecraft image datasets, such as SPARK, fall short of providing realistic camera observations, rendering the derived algorithms unsuitable for real SSA systems. In this research, we introduce SpaceSet, a large-scale realistic space-based image dataset for SSA. We consider accurate space orbit dynamics and a physical camera model with various noise distributions, generating images at the photon level. To extend the available observation window, four overlapping cameras are simulated with a fixed rotation angle. SpaceSet includes images of RSOs observed at ranges from $19$ km to $63{,}000$ km, captured by a tracker operating in LEO, MEO, and GEO orbits over a period of $5{,}000$ seconds. Each image has a resolution of $4418 \times 4418$ pixels, providing detailed features for developing advanced SSA approaches. We split the dataset into three subsets: SpaceSet-100, SpaceSet-5000, and SpaceSet-full, catering to various image processing applications. The SpaceSet-full corpus includes a comprehensive data loader with $781.5$ GB of images and $25.9$ MB of ground-truth labels. We also benchmark detection and tracking algorithms on the SpaceSet-100 dataset, using a specified splitting method to accelerate the training process.
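For readers unfamiliar with photon-level image formation, the abstract's mention of "a physical camera model with various noise distributions" typically corresponds to sampling Poisson shot noise on the expected photon counts and adding Gaussian read noise before digitization. The sketch below illustrates that general recipe only; it is not the authors' simulator, and the function name (`render_photon_level`) and all parameter values (exposure, gain, read noise, full-well depth) are illustrative assumptions.

```python
# Hypothetical sketch of photon-level image formation (NOT the SpaceSet simulator):
# Poisson shot noise on expected photon counts, Gaussian read noise, saturation,
# and 16-bit digitization. All names and values are illustrative assumptions.
import numpy as np

def render_photon_level(irradiance, exposure_s=0.1, gain_e_per_photon=1.0,
                        read_noise_e=5.0, full_well_e=90_000, rng=None):
    """Convert a noiseless irradiance map (photons/s per pixel) into a noisy frame."""
    rng = np.random.default_rng() if rng is None else rng
    expected_photons = irradiance * exposure_s                 # mean photon count per pixel
    photons = rng.poisson(expected_photons)                    # shot noise (Poisson)
    electrons = photons * gain_e_per_photon                    # photo-conversion to electrons
    electrons = electrons + rng.normal(0.0, read_noise_e, electrons.shape)  # read noise
    electrons = np.clip(electrons, 0, full_well_e)             # sensor saturation
    return (electrons / full_well_e * 65535).astype(np.uint16) # 16-bit digitization

# Example on a small crop for illustration (full SpaceSet frames are 4418 x 4418):
crop = np.full((512, 512), 50.0)   # faint background irradiance (photons/s)
crop[256, 256] = 5e5               # a single bright point source standing in for an RSO
image = render_photon_level(crop)
```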
Supplementary Material: zip
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9543