Uses and Abuses of the Cross-Entropy Loss: Case Studies in Modern Deep Learning

Published: 09 Dec 2020 (Last Modified: 22 Oct 2023) · ICBINB 2020 Oral
Keywords: label smoothing, actor-mimic RL, continuous-categorical distribution
TL;DR: We study label smoothing and actor-mimic RL under a probabilistic lens, using the novel continuous-categorical distribution.
Abstract: Modern deep learning is primarily an experimental science, in which empirical advances occasionally come at the expense of probabilistic rigor. Here we focus on one such example: the use of the categorical cross-entropy loss to model data that is not strictly categorical, but rather takes values on the simplex. This practice is standard in neural network architectures with label smoothing and in actor-mimic reinforcement learning, amongst others. Drawing on the recently discovered continuous-categorical distribution, we propose probabilistically inspired alternatives to these models, providing an approach that is more principled and theoretically appealing. Through careful experimentation, including an ablation study, we identify the potential for outperformance in these models, thereby highlighting the importance of a proper probabilistic treatment, as well as illustrating some of the failure modes thereof.
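To make the mismatch concrete, here is a minimal sketch (not taken from the paper) of the practice the abstract describes: label smoothing produces targets that lie in the interior of the simplex, yet these are typically scored with the same categorical cross-entropy used for one-hot labels. The function names and the smoothing parameter `eps` are illustrative assumptions, not the paper's API.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    # Label smoothing: mix the one-hot target with the uniform
    # distribution over K classes, yielding a simplex-valued target.
    k = one_hot.shape[-1]
    return (1.0 - eps) * one_hot + eps / k

def categorical_cross_entropy(targets, probs):
    # Standard categorical cross-entropy, here applied to targets
    # that are no longer one-hot -- the practice the paper examines.
    return -np.sum(targets * np.log(probs), axis=-1)

one_hot = np.array([[0.0, 0.0, 1.0]])
probs = np.array([[0.1, 0.2, 0.7]])       # model's predicted probabilities
smoothed = smooth_labels(one_hot, eps=0.1)
loss = categorical_cross_entropy(smoothed, probs)
```

Note that the smoothed target still sums to one, so it is a valid point on the simplex; the paper's argument is that scoring such targets with the categorical cross-entropy treats them as if they were draws from a categorical distribution, which they are not.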
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2011.05231/code)