Uncertainty-Aware Reinforcement Learning for Risk-Sensitive Player Evaluation in Sports Game

Published: 31 Oct 2022, Last Modified: 14 Oct 2022
NeurIPS 2022 Accept
Readers: Everyone
Keywords: Reinforcement Learning, Uncertainty Estimation, Sports Analytics, Agent Evaluation
TL;DR: We design a data-driven RL framework that enables post-hoc calibration of action values according to their aleatoric and epistemic uncertainties.
Abstract: A major task of sports analytics is player evaluation. Previous methods commonly measured the impact of players' actions on desirable outcomes (e.g., goals or winning) without considering the risk induced by stochastic game dynamics. In this paper, we design an uncertainty-aware Reinforcement Learning (RL) framework to learn a risk-sensitive player evaluation metric from stochastic game dynamics. To embed the risk of a player’s movements into the distribution of action-values, we model their 1) aleatoric uncertainty, which represents the intrinsic stochasticity in a sports game, and 2) epistemic uncertainty, which is due to a model's insufficient knowledge regarding Out-of-Distribution (OoD) samples. We demonstrate how a distributional Bellman operator and a feature-space density model can capture these uncertainties. Based on such uncertainty estimation, we propose a Risk-sensitive Game Impact Metric (RiGIM) that measures players' performance over a season by conditioning on a specific confidence level. Empirical evaluation, based on over 9M play-by-play ice hockey and soccer events, shows that RiGIM correlates highly with standard success measures and has a consistent risk sensitivity.
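This page shows only the abstract, so the paper's exact formulation is not given here. As a minimal, hypothetical sketch of the ingredients the abstract names: a C51-style categorical projection illustrates what a distributional Bellman operator computes (aleatoric uncertainty lives in the spread of the value distribution), a Gaussian density over learned feature vectors illustrates a feature-space density model for flagging OoD samples (epistemic uncertainty), and a left-tail quantile readout illustrates conditioning an action value on a confidence level. All names (project_categorical_target, FeatureDensityModel, risk_sensitive_value) are invented for illustration and are not from the paper.

```python
# Illustrative sketch only -- not the authors' implementation.
import numpy as np

# Fixed support for a categorical value distribution (C51-style).
V_MIN, V_MAX, N_ATOMS = -1.0, 1.0, 51
SUPPORT = np.linspace(V_MIN, V_MAX, N_ATOMS)


def project_categorical_target(next_probs, reward, gamma=0.99):
    """Distributional Bellman backup: project r + gamma * Z(s', a')
    back onto the fixed support. The resulting distribution's spread
    reflects aleatoric uncertainty in the game dynamics."""
    tz = np.clip(reward + gamma * SUPPORT, V_MIN, V_MAX)
    b = (tz - V_MIN) / (SUPPORT[1] - SUPPORT[0])
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    target = np.zeros(N_ATOMS)
    for j in range(N_ATOMS):
        if lower[j] == upper[j]:  # b[j] landed exactly on an atom
            target[lower[j]] += next_probs[j]
        else:  # split mass between the two neighboring atoms
            target[lower[j]] += next_probs[j] * (upper[j] - b[j])
            target[upper[j]] += next_probs[j] * (b[j] - lower[j])
    return target


class FeatureDensityModel:
    """Gaussian density over feature vectors; low density (large
    Mahalanobis distance) suggests an OoD sample, i.e., high
    epistemic uncertainty."""

    def fit(self, features):
        d = features.shape[1]
        self.mean = features.mean(axis=0)
        self.cov_inv = np.linalg.inv(
            np.cov(features, rowvar=False) + 1e-6 * np.eye(d))

    def epistemic_score(self, x):
        diff = x - self.mean
        return float(diff @ self.cov_inv @ diff)


def risk_sensitive_value(probs, confidence=0.25):
    """Read off the action value at a given confidence level, i.e.,
    a left-tail quantile of the value distribution; lower confidence
    gives a more risk-averse evaluation."""
    cdf = np.cumsum(probs)
    idx = min(np.searchsorted(cdf, confidence), N_ATOMS - 1)
    return SUPPORT[idx]
```

In this sketch, a risk-averse evaluation would use a low confidence level (deep left tail), while a confidence near 0.5 recovers roughly the median action value; the paper's RiGIM may condition on confidence levels differently.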