Keywords: Symbolic Reinforcement Learning, Temporal Abstraction, Monte Carlo Tree Search (MCTS), Event-driven, Allen’s Interval Algebra
TL;DR: A hybrid symbolic–reactive framework that learns temporal rules with MCTS, enabling interpretable decision-making in event-driven environments.
Abstract: Many real-world environments, from smart homes to industrial systems, produce asynchronous event streams driven by latent activities with complex temporal structures. Recognizing these patterns requires reasoning over temporal dependencies that reactive policies alone do not capture. We propose a hybrid reinforcement learning framework that combines symbolic program synthesis with reactive policy optimization for interpretable activity recognition. This hybrid approach enables the agent to disambiguate overlapping activities, generalize across history patterns, and maintain interpretable decision logic. Our method discovers temporal rules as logical abstractions over event histories, using a compositional grammar based on Allen’s interval algebra. Monte Carlo Tree Search (MCTS) explores the rule space, refining candidates to maximize cumulative reward. The resulting rules define symbolic contexts that augment the observable state and support decision-making in a near-Markovian surrogate process. Evaluations on a synthetic benchmark with concurrent, asynchronous activities show strong task performance and symbolic fidelity compared to neural and evolutionary baselines.
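The rule search described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`allen_relation`, `search`), the two-event histories, the toy reward, and the flat UCB1 bandit (the one-level special case of MCTS tree search) are all simplifying assumptions made for illustration.

```python
import math
import random

# A subset of Allen's interval relations between intervals (s1, e1) and (s2, e2).
def allen_relation(a, b):
    s1, e1 = a
    s2, e2 = b
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 < s2 < e1 < e2:
        return "overlaps"
    if s2 < s1 and e1 < e2:
        return "during"
    if s1 == s2 and e1 == e2:
        return "equals"
    return "other"

# Hypothetical rule space: "event A <relation> event B".
RELATIONS = ["before", "meets", "overlaps", "during"]

def rule_holds(relation, hist):
    # hist: dict mapping event label -> (start, end) interval
    return allen_relation(hist["A"], hist["B"]) == relation

# Toy reward: 1 if the rule's truth value matches the latent activity label.
def reward(relation, episode):
    hist, label = episode
    return 1.0 if rule_holds(relation, hist) == label else 0.0

# Flat UCB1 bandit over candidate rules -- the degenerate one-level case of
# MCTS, standing in for the full tree search over a compositional grammar.
def search(episodes, iters=2000, c=1.4):
    counts = {r: 0 for r in RELATIONS}
    values = {r: 0.0 for r in RELATIONS}
    for t in range(1, iters + 1):
        def ucb(r):
            if counts[r] == 0:
                return float("inf")
            return values[r] / counts[r] + c * math.sqrt(math.log(t) / counts[r])
        r = max(RELATIONS, key=ucb)  # selection
        ep = random.choice(episodes)  # rollout on a sampled episode
        counts[r] += 1
        values[r] += reward(r, ep)  # backpropagate the reward
    return max(RELATIONS, key=lambda r: values[r] / max(counts[r], 1))
```

In the full method the search instead expands compositions of such relations into richer rules, and the discovered rules become symbolic context features for the reactive policy.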
Confirmation: I understand that authors of each paper submitted to EWRL may be asked to review 2-3 other submissions to EWRL.
Serve As Reviewer: ~Ivelina_Stoyanova2
Track: Regular Track: unpublished work
Submission Number: 96