SatNet: A Benchmark for Satellite Scheduling Optimization

Published: 16 Dec 2021, Last Modified: 05 May 2023 · ML4OR-22 Poster
Keywords: reinforcement learning, benchmark, dataset, space applications, optimization, scheduling, operations research, planning, linear programming
TL;DR: SatNet is a benchmark (dataset, baseline results, and an initial deep RL implementation) for satellite scheduling problems, based on historical requests from NASA's Deep Space Network.
Abstract: Satellites provide essential services such as networking and weather tracking, and the number of satellites is expected to grow rapidly in the coming years. Communication with terrestrial ground stations is one of the critical functions of any space mission. Satellite scheduling is a problem that has been scientifically investigated since the 1970s. A central aspect of the problem is the need to account for bandwidth resource contention and satellite visibility constraints, since communication requires line of sight. Due to the combinatorial nature of the problem, prior solutions such as linear programs and evolutionary algorithms require extensive compute capabilities to output a feasible schedule for each scenario. Machine-learning-based scheduling offers an alternative: train a model on historical data and generate a schedule quickly through model inference. We present SatNet, a benchmark for satellite scheduling optimization based on historical data from the NASA Deep Space Network. We propose a formulation of the satellite scheduling problem as a Markov Decision Process and use reinforcement learning (RL) policies to generate schedules. The constraints imposed by SatNet differ in nature from those of other combinatorial optimization problems, such as vehicle routing, studied in prior literature. Our initial results indicate that RL is an alternative optimization approach that can generate candidate solutions of quality comparable to existing state-of-the-practice results. However, we also find that RL policies overfit to the training dataset and do not generalize well to new data, necessitating continued research on reusable and generalizable agents.
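The abstract describes framing antenna scheduling as a Markov Decision Process but does not give the formulation here. The sketch below is a minimal, hypothetical illustration of such a framing in Python: every name (Request, SchedulingEnv, the DSS-14 antenna label, the greedy earliest-window placement rule, the scheduled-hours reward) is an assumption for illustration and is not taken from the SatNet release or the paper's actual MDP.

```python
# Hypothetical MDP sketch: the state is the set of pending requests, an action
# selects one request to place, and the reward is the tracking time scheduled.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Request:
    request_id: int
    duration_hrs: float                            # requested tracking time
    visible_windows: List[Tuple[float, float]]     # (start, end) with line of sight
    antenna: str                                   # ground antenna serving the request

class SchedulingEnv:
    """Toy episodic environment: the chosen request is placed greedily into its
    earliest feasible visibility window on the requested antenna."""

    def __init__(self, requests: List[Request]):
        self.all_requests = requests

    def reset(self) -> List[int]:
        self.pending = list(range(len(self.all_requests)))
        self.antenna_free_at: Dict[str, float] = {}    # next free time per antenna
        return list(self.pending)                      # observation: pending request ids

    def step(self, action: int):
        """action indexes into the pending list; reward = hours successfully scheduled."""
        req = self.all_requests[self.pending.pop(action)]
        free_at = self.antenna_free_at.get(req.antenna, 0.0)
        reward = 0.0
        for start, end in req.visible_windows:
            begin = max(start, free_at)
            if begin + req.duration_hrs <= end:        # fits inside the visibility window
                self.antenna_free_at[req.antenna] = begin + req.duration_hrs
                reward = req.duration_hrs
                break
        done = len(self.pending) == 0
        return list(self.pending), reward, done

# Usage: a random policy rollout on two toy requests sharing one antenna.
if __name__ == "__main__":
    import random
    env = SchedulingEnv([
        Request(0, 2.0, [(0.0, 4.0)], "DSS-14"),
        Request(1, 3.0, [(1.0, 6.0)], "DSS-14"),
    ])
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(random.randrange(len(obs)))
        total += reward
    print(f"scheduled {total} hours")
```

Under this toy framing, an RL policy would learn an ordering over pending requests that maximizes total scheduled hours subject to visibility and antenna contention; the benchmark's real constraints and reward are defined by the SatNet dataset itself.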