Keywords: Structured Probability Spaces, Neurosymbolic, Neural-Symbolic, Deep Learning, Human Activity Recognition
Track: Main Track
Abstract: This paper examines the impact of neurosymbolic learning on sequence analysis in Structured Probability Spaces (SPS), comparing its effectiveness against a purely neural approach. Sequence analysis in SPS is challenging due to the combinatorial explosion of states and the difficulty of obtaining sufficient annotated training samples. Additionally, in SPS, the set of realizations with non-zero support is often a scattered, non-trivial subset of the Cartesian product of the variables' domains, which further complicates learning and inference. The problem of sequence analysis in SPS arises, for example, when reconstructing the activities of goal-directed agents from noisy and ambiguous sensor data. We explore the potential of neurosymbolic methods, which integrate symbolic background knowledge with neural learning, to constrain the hypothesis space and improve learning efficiency. Specifically, we conduct a simulation study in human activity recognition using DeepProbLog as a representative neurosymbolic learning framework. Our results demonstrate that incorporating symbolic knowledge improves sample efficiency, generalization, and zero-shot learning compared to a purely neural approach. Furthermore, we show that neurosymbolic models maintain robust performance under data scarcity while offering enhanced interpretability and stability. These findings suggest that neurosymbolic learning provides a promising foundation for sequence analysis in complex, structured domains, where purely neural approaches struggle with insufficient training data and limited generalization ability.
Paper Type: Long Paper
Submission Number: 36