MULTIMODAL DETECTION OF FIREBALL EVENTS USING GEOSTATIONARY LIGHTNING MAPPER DATA AND GROUND-BASED ALL-SKY IMAGERY

Published: 01 Oct 2025 | Last Modified: 13 Nov 2025 | RISEx Poster | CC BY 4.0
Keywords: fireball detection, multimodal learning, machine learning, clustering, satellite imagery, ground-based camera, event classification, GLM, GFO, XGBoost, PyTorch, data integration
TL;DR: A multimodal machine learning pipeline using satellite and ground-based data improves fireball event detection and classification.
Abstract: Detecting atmospheric fireball events is challenging due to fragmented coverage and heterogeneous data sources. This work presents a multimodal machine learning pipeline that integrates satellite-based Geostationary Lightning Mapper (GLM) metadata with ground-based all-sky imagery from networks such as the Desert Fireball Network (DFN) and the Global Fireball Observatory (GFO). Foundational analyses included clustering NASA Fireball API events by altitude, velocity, and energy, revealing intrinsic event patterns. An attempted cross-matching of real GLM and ground-based data using spatio-temporal buffers (500 km, 5 min) found no direct matches, highlighting the complexity of integrating these sources. Initial model development therefore used synthetic multimodal data to validate the architecture and feature-fusion strategies, combining ResNet18 image encodings with tabular features in PyTorch and XGBoost frameworks. The prototype achieved 60% accuracy and a 0.67 F1-score for fireball classification, identifying energy, confidence, and brightness as discriminative features. These results establish a baseline; future efforts will focus on refining fusion techniques and achieving robust real-world data integration for operational fireball detection.
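As an illustration of the clustering step, the sketch below groups fireball events by altitude, velocity, and energy. The abstract does not name the algorithm or the data schema, so the use of scikit-learn's KMeans, the column names (altitude_km, velocity_kms, energy_kt), and the choice of three clusters are assumptions for demonstration only.

```python
# Minimal clustering sketch (assumed algorithm: KMeans; assumed column names).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_events(df: pd.DataFrame, n_clusters: int = 3) -> pd.DataFrame:
    # Hypothetical feature columns; the real NASA Fireball API fields may differ.
    features = df[["altitude_km", "velocity_kms", "energy_kt"]].dropna()
    scaled = StandardScaler().fit_transform(features)  # put features on a common scale
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(scaled)
    out = features.copy()
    out["cluster"] = labels
    return out
```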
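The spatio-temporal cross-matching with 500 km / 5 min buffers could be approximated as below. The event dictionaries, field names (lat, lon, time), and the brute-force pairwise loop are illustrative assumptions, not the authors' implementation.

```python
# Sketch of spatio-temporal cross-matching between GLM and ground-based events.
from datetime import timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def cross_match(glm_events, ground_events,
                max_km=500.0, max_dt=timedelta(minutes=5)):
    """Return (glm, ground) pairs that fall inside the spatio-temporal buffer.

    Each event is assumed to be a dict with "lat", "lon", and a datetime "time".
    """
    matches = []
    for g in glm_events:
        for s in ground_events:
            close_in_time = abs(g["time"] - s["time"]) <= max_dt
            close_in_space = haversine_km(g["lat"], g["lon"], s["lat"], s["lon"]) <= max_km
            if close_in_time and close_in_space:
                matches.append((g, s))
    return matches
```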
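One plausible reading of the ResNet18-plus-tabular fusion is a simple concatenation (late-fusion) head in PyTorch, sketched below. The hidden-layer size, the number of tabular features, and the binary output head are assumptions; the concatenated feature vector could equally be passed to an XGBoost classifier, since the abstract mentions both frameworks.

```python
# Late-fusion sketch: ResNet18 image embedding concatenated with tabular features.
import torch
import torch.nn as nn
from torchvision import models

class FireballFusionNet(nn.Module):
    def __init__(self, n_tabular: int = 8, n_classes: int = 2):  # sizes are assumptions
        super().__init__()
        backbone = models.resnet18(weights=None)   # pretrained weights optional
        backbone.fc = nn.Identity()                # expose the 512-d image embedding
        self.image_encoder = backbone
        self.classifier = nn.Sequential(
            nn.Linear(512 + n_tabular, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_encoder(image)           # (B, 512)
        fused = torch.cat([img_feat, tabular], dim=1)  # concatenate the two modalities
        return self.classifier(fused)
```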
Submission Number: 60