Abstract: Predicting a pedestrian's future behavior in a crowd plays an important role in many fields, such as autonomous driving, robot navigation, video surveillance, and intelligent security systems. The task is challenging because pedestrian motion is easily influenced by interactions with surrounding pedestrians. Previous works exploit these interactions to make prediction more effective; however, they assume fixed-length inputs and therefore ignore shorter pedestrian trajectories, which leads to insufficient feature information and inaccurate predictions in some scenarios. In this paper, we propose an Autoencoder-based model for pedestrian trajectory prediction of variable length (ASTRAL). First, an autoencoder processes pedestrian data with variable-length trajectories. Then, an optimized multi-head attention mechanism extracts the interactions between neighboring pedestrians. Finally, an LSTM decodes the resulting vectors to produce predictions. In addition, we fine-tune the model to further improve its performance. We evaluate our model against state-of-the-art methods on public benchmark datasets; compared with prior work, it improves ADE (average displacement error) and FDE (final displacement error) by \(9\%\) and \(33\%\), respectively. Our model therefore outperforms previous works and predicts future pedestrian trajectories more effectively.
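For reference, ADE and FDE are the standard evaluation metrics in trajectory prediction: ADE is the mean L2 distance between predicted and ground-truth positions over all predicted timesteps, while FDE is the L2 distance at the final timestep. A minimal sketch of their computation (the function name and array layout are illustrative, not taken from the paper):

```python
import numpy as np

def ade_fde(pred, gt):
    """Compute ADE and FDE for one trajectory.

    pred, gt: arrays of shape (T, 2) holding predicted and
    ground-truth (x, y) positions over T future timesteps.
    """
    # Per-timestep Euclidean (L2) displacement errors
    dists = np.linalg.norm(pred - gt, axis=1)
    ade = dists.mean()   # average displacement error over all steps
    fde = dists[-1]      # final displacement error at the last step
    return ade, fde
```

In benchmark evaluations these per-trajectory values are typically averaged over all pedestrians in the test set.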