Abstract: Global Navigation Satellite Systems (GNSS) play a fundamental role in positioning, navigation, and geosciences, offering reliable solutions under ideal signal conditions. However, challenges such as signal degradation, atmospheric disturbances, and multipath effects often compromise accuracy and reliability. Conventional GNSS positioning approaches depend heavily on predefined models that leverage satellite geometry and signal characteristics. While effective in such conditions, these methods struggle in complex environments and are often constrained by rigid assumptions about noise and error behavior. Recent developments in machine learning (ML) offer a promising
alternative, introducing data-driven adaptability and the ability to learn complex error patterns directly from observational data. Although ML has long been used to advance GNSS applications, a comprehensive assessment of the techniques used, their effectiveness and limitations, and an overview of recent ML-based GNSS applications are still lacking. In this paper, we systematically review the landscape of ML techniques applied to GNSS, ranging from traditional approaches to modern deep architectures and emerging paradigms. Unlike earlier reviews that often assume familiarity with ML, this work offers a brief yet comprehensive overview of the underlying principles, contextualized within GNSS applications. We further examine their use cases in signal classification, error mitigation, and positioning enhancement, while summarizing the datasets employed across
the literature. Our findings highlight persistent challenges, including poor generalization across environments,
limited annotated data, a lack of model interpretability, and deployment constraints on edge devices. We conclude with
recommendations for future work, stressing the importance of standardized benchmarks, multi-sensor datasets,
and adaptive, resource-efficient models to advance reliable, scalable, and intelligent GNSS systems.