Provably Efficient Long-Horizon Exploration in Monte Carlo Tree Search through State Occupancy Regularization

Published: 17 Jul 2025, Last Modified: 06 Sept 2025. EWRL 2025 Poster. License: CC BY 4.0
Keywords: RL Theory, Convex Optimization, MCTS
TL;DR: Regularizing the state occupancy measure of a search tree produces tractable policies with provable exploration guarantees
Abstract: Monte Carlo tree search (MCTS) has been successful in a variety of domains, but faces challenges with long-horizon exploration when compared to sampling-based motion planning algorithms like Rapidly-Exploring Random Trees. To address these limitations of MCTS, we derive a tree search algorithm based on policy optimization with state occupancy measure regularization, which we call {\it Volume-MCTS}. We show that count-based exploration and sampling-based motion planning can be derived as approximate solutions to this state occupancy measure regularized objective. We test our method on several robot navigation problems, and find that Volume-MCTS outperforms AlphaZero and displays significantly better long-horizon exploration properties.
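The abstract states that count-based exploration can be derived as an approximate solution to the state-occupancy-regularized objective. A minimal illustrative sketch of that connection, not the paper's actual algorithm: if the regularizer is the entropy of the empirical occupancy measure d(s) = N(s)/N over tree nodes, its gradient with respect to one extra visit to s is roughly -log d(s), which yields a count-based bonus favoring rarely visited states. All function names, the pseudo-count for unseen states, and the weight `lam` are illustrative assumptions.

```python
import math

def occupancy_entropy(counts):
    """Shannon entropy of the empirical state occupancy d(s) = N(s)/N
    over visit counts collected in the search tree."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values() if n > 0)

def regularized_score(q_value, counts, state, lam=1.0):
    """Q(s) plus an entropy-derived exploration bonus.

    -log d(s) approximates the gradient of occupancy entropy with
    respect to an additional visit to s, so states with low visit
    counts receive a larger bonus -- a count-based exploration term.
    (Illustrative sketch only; `lam` and the 0.5 pseudo-count for
    unseen states are assumptions, not values from the paper.)
    """
    total = sum(counts.values())
    d_s = counts.get(state, 0.5) / total  # pseudo-count for unseen states
    return q_value + lam * (-math.log(d_s))
```

During selection, a node would pick the child maximizing `regularized_score`, so exploration pressure falls off as a state's share of total visits grows.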
Confirmation: I understand that authors of each paper submitted to EWRL may be asked to review 2-3 other submissions to EWRL.
Serve As Reviewer: ~Liam_Schramm2
Track: Fast Track: published work
Publication Link: https://proceedings.mlr.press/v235/schramm24a.html
Submission Number: 156