Abstract: Sequence prediction with Transformers has recently become essential in a wide range of fields. However, existing Transformers focus solely on predicting the next elements of a sequence and do not predict when those elements occur. In this paper, we therefore propose an extension of the Transformer that predicts not only the next elements but also their occurrence times. To this end, we extend the Transformer in three ways: (1) we propose a new positional encoding that reflects both the order and the occurrence time of each element in a sequence; (2) we extend the output layer to predict the next element and its occurrence time simultaneously; and (3) we refine the loss function to measure the difference between sequences in terms of both the order and the occurrence times of their elements. Experiments on real datasets confirm that the proposed model predicts the order and occurrence time of each element more accurately than the existing Transformer.
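To illustrate extension (1), the sketch below shows one way a positional encoding could mix an element's sequence index (order) with its occurrence time. This is a hypothetical construction for illustration only; the paper's actual formulation is not given in the abstract, and the split of even/odd dimensions between order and time is an assumption.

```python
import numpy as np

def time_aware_encoding(times, d_model=8):
    """Hypothetical time-aware positional encoding (sketch).

    Combines sinusoidal features of the element's position in the
    sequence (order) with sinusoidal features of its occurrence time.
    """
    times = np.asarray(times, dtype=float)
    positions = np.arange(len(times))
    enc = np.zeros((len(times), d_model))
    for i in range(0, d_model, 2):
        freq = 1.0 / (10000 ** (i / d_model))
        # assumption: even dimensions encode order, odd dimensions
        # encode occurrence time
        enc[:, i] = np.sin(positions * freq)
        enc[:, i + 1] = np.cos(times * freq)
    return enc

# three events occurring at times 0.0, 0.5, and 2.0
emb = time_aware_encoding([0.0, 0.5, 2.0], d_model=8)
print(emb.shape)  # (3, 8)
```

Such an encoding would be added to the element embeddings before the self-attention layers, in the same way as the standard sinusoidal positional encoding.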