Abstract: Recent advances in the generative capabilities of large language models have reshaped technology research and broadened the scope of artificial intelligence applications. However, the size of such models raises concerns about deploying them on hardware with limited resources. This paper presents a comprehensive study on training Portuguese-focused Small Language Models (SLMs). We developed a dedicated training dataset and applied both full fine-tuning and parameter-efficient fine-tuning (PEFT) for comparative analysis. Using Microsoft’s Phi and Google’s Gemma as base models, we created ten models of our own, named PhiBode and GemBode, ranging from approximately 1 billion to 7 billion parameters. Our findings offer insights into the performance and applicability of these models, and the comparative analysis provides a clear benchmark for future research. The results demonstrate the effectiveness of our training methods, contributing to language model training research, particularly for the Portuguese language.
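The abstract contrasts full fine-tuning with PEFT on the Phi and Gemma bases. As a minimal sketch of what a LoRA-style PEFT setup for one of these base models could look like (assuming the Hugging Face `transformers` and `peft` libraries; the model choice, target modules, and hyperparameters below are illustrative placeholders, not values reported by the paper):

```python
# Illustrative LoRA (PEFT) setup for a causal LM, not the paper's exact recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "microsoft/phi-2"  # one possible Phi base; illustrative choice
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank update matrices instead of all weights,
# which is what makes PEFT attractive under limited hardware.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank updates (assumed)
    lora_alpha=32,                         # scaling factor for the updates (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base weights
```

Full fine-tuning would instead update every parameter of the base model, which is the hardware-cost trade-off the abstract's comparative analysis examines.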