Abstract: Language modeling is at the core of many natural language processing tasks. We analyze two such recent models: a Gated Convolutional Network (GCN) with five layers on the Wikitext-2 dataset and a Transformer network with 24 layers on the Google Billion Word dataset. We find that when these models are executed on modern graphics processing units, 30%-40% of the execution time is spent in the final adaptive softmax layer. Analytical modeling of the computation and memory demands of the GCN shows that this behavior will persist even if the hidden state size is increased, which could be needed to improve accuracy or to support a larger vocabulary. We present variations of the adaptive softmax layer that reduce its execution time by 40% and that scale better with the hidden state size.
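To make the layer under discussion concrete, the sketch below shows how an adaptive softmax output layer is typically attached to a language model, here using PyTorch's built-in AdaptiveLogSoftmaxWithLoss. The hidden size, vocabulary size, and cluster cutoffs are illustrative placeholders, not the configurations analyzed in the paper.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; not the settings used for the GCN or Transformer in the paper.
hidden_size = 1024
vocab_size = 100_000

# Adaptive softmax partitions the vocabulary into frequency-based clusters:
# a small "head" of frequent tokens plus tail clusters with reduced projections.
adaptive_softmax = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_size,
    n_classes=vocab_size,
    cutoffs=[2_000, 10_000, 50_000],  # head + three tail clusters (assumed split)
    div_value=4.0,                    # each successive tail uses a smaller projection
)

# A batch of hidden states from the model body and their target token ids.
hidden = torch.randn(32, hidden_size)
targets = torch.randint(0, vocab_size, (32,))

# Forward pass returns per-example log-probabilities of the targets and the mean loss.
output = adaptive_softmax(hidden, targets)
print(output.loss)
```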