Graph Conditional Variational Models: Too Complex for Multiagent Trajectories?

Published: 09 Dec 2020, Last Modified: 05 May 2023
Venue: ICBINB 2020 Spotlight
Keywords: multiagent, multi-agent, trajectories, trajectory, trajectory modeling, density estimation, CVMs, conditional variational models, VRNN, variational RNNs, variational recurrent neural networks, CVAE, conditional VAEs, conditional variational autoencoders, GSNN, MDN, mixture density networks, GMM, Gaussian mixture models, graph networks, GNN, graph neural networks, GRNN, graph RNNs, trajectron, GVRNN, graph VRNNs
TL;DR: In contrast to common belief, results on modeling (multiagent) trajectories with (graph) neural networks are misleading: well-known mixture density networks appear to dominate conditional variational autoencoding approaches in rigorous experiments.
Abstract: Recent advances in modeling multiagent trajectories combine graph architectures such as graph neural networks (GNNs) with conditional variational models (CVMs) such as variational RNNs (VRNNs). CVMs were originally proposed to facilitate learning with multi-modal and structured data, and thus seem to perfectly match the requirements of multi-modal multiagent trajectories with their structured output spaces. Empirical results of VRNNs on trajectory data support this assumption. In this paper, we revisit these experiments and proposed architectures with additional rigour, ablation runs, and baselines. In contrast to common belief, we show that prior results with CVMs on trajectory data might be misleading. Given a neural network with a graph architecture and/or structured output function, variational autoencoding does not seem to contribute statistically significantly to empirical performance. Instead, we show that well-known emission functions do contribute, while requiring less complexity, engineering effort, and computation time.
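To make the contrast concrete: the "well-known emission functions" the abstract refers to are mixture-density-style output heads, which score a target under a Gaussian mixture whose parameters a network would predict. The sketch below is illustrative only, not code from the paper; the function name and 1-D setting are assumptions, and in practice the weights, means, and scales would come from an MDN output layer rather than be passed in by hand.

```python
import numpy as np

def mdn_log_likelihood(y, weights, means, scales):
    """Log-likelihood of a scalar target y under a 1-D Gaussian mixture.

    weights: (K,) mixture coefficients summing to 1
             (in an MDN these come from a softmax over network outputs)
    means, scales: (K,) component means and standard deviations
                   (in an MDN these are also network outputs)
    """
    # Per-component Gaussian log-densities.
    log_comp = (
        -0.5 * np.log(2.0 * np.pi)
        - np.log(scales)
        - 0.5 * ((y - means) / scales) ** 2
    )
    # Mixture likelihood: weighted sum of component densities.
    return float(np.log(np.sum(weights * np.exp(log_comp))))
```

Training such an emission head amounts to maximizing this log-likelihood directly, with no latent-variable inference or KL term, which is the simplicity argument the abstract makes against conditional variational approaches.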