PIKing Neural Networks: Parallel Inference via Kernelizing. A Comparative Study of Tandem-Deployed Space AI Models Across Heterogeneous Hardware
Keywords: Parallel Inference, Heterogeneous Edge Hardware, Onboard Space AI
TL;DR: We evaluate how tandem perception and FDIR neural networks interact under parallel deployment across heterogeneous hardware representative of spaceborne compute constraints.
Abstract: Autonomous Fault Detection, Isolation, and Recovery (FDIR) is critical for spacecraft operating under strict onboard resource constraints. While lightweight anomaly detection models are effective for telemetry monitoring, their co-execution with heavier perception networks remains underexplored. We study the tandem deployment of an LSTM-based anomaly detector alongside upstream perception models such as PyNAS and YOLO11s, used for Earth Observation and Autonomous Navigation, respectively. We deploy these models in tandem across five heterogeneous platforms, evaluating both sequential and intermittent execution to reflect supervisory FDIR scheduling. To do this, we introduce the concept of PIKing (Parallel Inference via Kernelizing) the models. Using measured latency, computed FLOPs, and empirical roofline analysis, we show that tandem behavior is strongly hardware-dependent: accelerator-backed systems exhibit near-additive performance, whereas CPU-bound platforms are sensitive to burst-triggered contention. Introducing lightweight execution gaps significantly stabilizes intermittent performance without modifying the model architectures, providing practical guidance for resource-constrained space deployment scenarios.
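The scheduling pattern described in the abstract — running a perception model and an FDIR anomaly detector in tandem, with optional lightweight execution gaps between iterations — can be sketched as follows. This is a minimal illustration only: `perception_step`, `fdir_step`, and the gap parameter are hypothetical stand-ins, not the paper's actual implementation or models.

```python
import time

def run_tandem(perception_step, fdir_step, n_iters=5, gap_s=0.0):
    """Hypothetical supervisory scheduler: run one perception inference,
    then one FDIR inference, optionally inserting a lightweight execution
    gap between iterations (the stabilization strategy the abstract
    reports for CPU-bound platforms). Returns per-iteration latencies."""
    latencies = []
    for _ in range(n_iters):
        t0 = time.perf_counter()
        perception_step()   # stand-in for e.g. a YOLO11s or PyNAS forward pass
        fdir_step()         # stand-in for e.g. an LSTM anomaly-detector pass
        latencies.append(time.perf_counter() - t0)
        if gap_s > 0:
            time.sleep(gap_s)  # execution gap to ease burst-triggered contention
    return latencies

# Toy CPU-bound workloads in place of real models.
lat = run_tandem(lambda: sum(i * i for i in range(10_000)),
                 lambda: sum(range(1_000)),
                 n_iters=3, gap_s=0.001)
print(len(lat))
```

Comparing the latency distributions with `gap_s=0.0` versus a small positive gap mirrors the sequential-versus-intermittent comparison the study performs.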
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 43