Abstract: Improving the effectiveness and efficiency of large language models (LLMs) simultaneously is a critical yet challenging research goal.
In this paper, we find that low-rank pre-training, normally considered an efficiency technique that compromises performance, can be scalably effective when the reduced parameters are precisely targeted.
Specifically, applying low-dimensional modules only to the attention layers resolves this issue and enhances both effectiveness and efficiency.
We refer to this structure as \textsl{Low-dimensional Projected Attention (LPA)} and provide an explanatory analysis.
Through extensive experiments at parameter scales of 130M and 370M, and scaling up to 3B, we validate the effectiveness and scalability of LPA. Our results show that the LPA model can save up to 12.4\% of training time while achieving an approximately 5\% improvement in test perplexity (ppl) and on downstream tasks compared with the vanilla Transformer.
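To make the described structure concrete, below is a minimal PyTorch-style sketch of attention whose projections are factorized through a low-dimensional bottleneck while the rest of the Transformer block stays full-rank. The module names, the rank hyperparameter `r`, and the two-factor parameterization are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of Low-dimensional Projected Attention (LPA): only the attention
# projections (Q/K/V/O) are replaced by low-rank factorizations.
# Names and the rank `r` are hypothetical; this is not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowDimProjection(nn.Module):
    """Replaces a d_model x d_model projection with two factors: d_model -> r -> d_model."""
    def __init__(self, d_model: int, r: int):
        super().__init__()
        self.down = nn.Linear(d_model, r, bias=False)  # project to low dimension r
        self.up = nn.Linear(r, d_model, bias=False)    # project back to d_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class LowDimProjectedAttention(nn.Module):
    """Multi-head self-attention whose Q/K/V/O projections are low-rank factorized."""
    def __init__(self, d_model: int, n_heads: int, r: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.head_dim = d_model // n_heads
        self.q_proj = LowDimProjection(d_model, r)
        self.k_proj = LowDimProjection(d_model, r)
        self.v_proj = LowDimProjection(d_model, r)
        self.o_proj = LowDimProjection(d_model, r)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            # (B, T, D) -> (B, n_heads, T, head_dim)
            return t.view(B, T, self.n_heads, self.head_dim).transpose(1, 2)

        q = split_heads(self.q_proj(x))
        k = split_heads(self.k_proj(x))
        v = split_heads(self.v_proj(x))
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        out = out.transpose(1, 2).reshape(B, T, D)
        return self.o_proj(out)


# Example usage: a 370M-scale-like configuration with an assumed rank.
attn = LowDimProjectedAttention(d_model=1024, n_heads=16, r=256)
y = attn(torch.randn(2, 128, 1024))  # (batch, sequence, d_model)
```

With `r < d_model / 2`, each factorized projection uses fewer parameters than a dense d_model x d_model matrix, which is the source of the efficiency gain the abstract refers to; the feed-forward layers are left untouched in this sketch.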
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Language Modeling, Efficient/Low-Resource Methods for NLP
Languages Studied: English
Submission Number: 1364