PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data
Abstract: In natural language processing (NLP), there is a need for more resources in Portuguese,
since much of the data used in state-of-the-art research is in other languages. In this
paper, we pretrain a T5 model on the BrWac corpus, an extensive collection of web pages
in Portuguese, and evaluate its performance against other Portuguese pretrained models and
multilingual models on three different tasks. We show that our Portuguese pretrained models
significantly outperform the original T5 models. Moreover, we demonstrate
the positive impact of using a Portuguese vocabulary. Our code and models are available at
https://github.com/unicamp-dl/PTT5.