Keywords: Feed-Forward Networks, Model Architecture, Knowledge Representation, Pre-training
TL;DR: FFNs in 70% of the consecutive middle layers of Transformer-based LMs contribute more to model performance than those in other layers.
Abstract: This study investigates the layerwise importance of feed-forward networks (FFNs) in transformer-based language models during pretraining.
We introduce an experimental approach that, while maintaining the total parameter count, increases the FFN dimensions in some layers and completely removes the FFNs from other layers.
Furthermore, because our focus is on the importance of FFNs during pretraining, we train models from scratch rather than using publicly available pretrained models, as is frequently done, and examine whether the importance of FFNs varies with their layer positions.
Through comprehensive evaluations of models with varying sizes (285M, 570M, and 1.2B parameters) and layer counts (12, 24, and 40 layers), we demonstrate that concentrating FFNs in 70% of the consecutive middle layers consistently outperforms standard configurations on multiple downstream tasks.
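The following is a minimal sketch (not the authors' code) of the layer-allocation idea described in the abstract: remove FFNs outside a middle block of layers and widen the remaining FFNs so the total FFN parameter count matches a standard uniform configuration. The function name `ffn_allocation` and the uniform redistribution of the parameter budget are assumptions for illustration.

```python
# Minimal sketch: allocate a fixed FFN parameter budget to the middle 70%
# of layers, widening their FFN hidden size so that total FFN parameters
# match a standard configuration with an FFN in every layer.

def ffn_allocation(num_layers: int, d_ff_uniform: int, middle_fraction: float = 0.7):
    """Return per-layer FFN hidden sizes: 0 outside the middle block,
    a widened dimension inside it, preserving total FFN parameters."""
    num_ffn_layers = round(num_layers * middle_fraction)
    start = (num_layers - num_ffn_layers) // 2   # first layer that keeps an FFN
    end = start + num_ffn_layers                 # one past the last FFN layer

    # FFN parameters scale linearly with the hidden size, so spreading the
    # budget of num_layers uniform FFNs over num_ffn_layers layers widens
    # each kept FFN by a factor of num_layers / num_ffn_layers.
    d_ff_widened = round(d_ff_uniform * num_layers / num_ffn_layers)

    return [d_ff_widened if start <= i < end else 0 for i in range(num_layers)]


if __name__ == "__main__":
    # e.g. a 24-layer model with a uniform FFN hidden size of 4096:
    # layers 0-2 and 20-23 get no FFN, layers 3-19 get a widened FFN.
    print(ffn_allocation(24, 4096))
```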
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 1572