Abstract: Since the advent of BERT, Transformer-based language models (TLMs) have shown outstanding effectiveness in several NLP tasks. In this paper, we aim to bring order to the landscape of TLMs and their performance on important NLP benchmarks. Our analysis sheds light on the advantages that some TLMs hold over others, but also reveals issues that prevent a complete and fair comparison in some situations.