- Abstract: In this paper, we propose a recurrent neural network architecture for early sequence classification, where the model must output a label as soon as possible with a negligible loss in accuracy. Our model learns how many sequence tokens it needs to observe before making a prediction, and the required number of steps varies from sequence to sequence. Experiments on sequential MNIST show that during inference the proposed architecture focuses on different parts of the sequence, which correspond to the contours of the handwritten digits. We also demonstrate improved prediction quality with a simultaneous reduction in the prefix size used; the extent of the reduction depends on how the distinctive features of each class are distributed over time.
- Keywords: Recurrent neural networks, Adaptive computational time, Early sequence classification
- TL;DR: We propose a recurrent model for early sequence classification based on the idea of Adaptive computational time.
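The core idea, ACT-style halting applied to a prefix of the input sequence, can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the component names (`rnn_step`, `halt_head`, `clf_head`) and the cumulative-probability stopping rule with threshold `1 - eps` are assumptions borrowed from Adaptive Computation Time, chosen to show how a per-sequence prefix length can emerge at inference.

```python
import numpy as np

def early_classify(tokens, rnn_step, halt_head, clf_head, h0, eps=0.01):
    """ACT-style early stopping (illustrative sketch): consume tokens until
    the cumulative halting probability exceeds 1 - eps, then predict."""
    h = h0
    cum_halt = 0.0
    t = 0
    for t, x in enumerate(tokens):
        h = rnn_step(h, x)                 # recurrent state update
        cum_halt += halt_head(h)           # per-step halting probability
        if cum_halt >= 1.0 - eps:          # confident enough: stop early
            break
    # Return the predicted label and the prefix length actually used;
    # the latter differs per sequence, as in the paper.
    return int(np.argmax(clf_head(h))), t + 1

# Toy components (hypothetical, for illustration only):
rnn_step = lambda h, x: np.tanh(h + x)           # trivial recurrent update
halt_head = lambda h: 0.4                        # constant halting probability
clf_head = lambda h: np.array([h[0], -h[0]])     # two-class logits

tokens = [np.array([1.0])] * 10
label, prefix = early_classify(tokens, rnn_step, halt_head, clf_head,
                               h0=np.array([0.0]))
# With a constant halting probability of 0.4, the loop stops after
# 3 of the 10 tokens, since 0.4 * 3 >= 1 - 0.01.
```

In the trained model the halting probability would be produced by a learned head conditioned on the hidden state, so sequences whose class-discriminative features appear early are classified from shorter prefixes.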