Abstract: Recommendation systems are essential for filtering data and
retrieving relevant information across various applications. Recent
advancements have seen these systems incorporate increasingly
large embedding tables, scaling up to tens of terabytes for industrial
use. However, the expansion of network parameters in traditional
recommendation models has plateaued at tens of millions, limiting
further benefits from increased embedding parameters. Inspired by
the success of large language models (LLMs), a new approach has
emerged that scales network parameters using innovative structures,
enabling continued performance improvements. A significant
development in this area is Meta's generative recommendation
model HSTU, which illustrates the scaling laws of recommendation
systems by expanding parameters into the trillions. This
new paradigm has achieved substantial performance gains in online
experiments. In this paper, we aim to enhance the understanding
of scaling laws by conducting comprehensive evaluations of large
recommendation models. Firstly, we investigate the scaling laws
across different backbone architectures of large recommendation
models. Secondly, we conduct comprehensive ablation studies
to explore the origins of these scaling laws. We then further assess
the performance of HSTU, as a representative of large recommendation
models, on complex user behavior modeling tasks to
evaluate its applicability. Notably, we also analyze its effectiveness
in ranking tasks for the first time. Finally, we offer insights into
future directions for large recommendation models. Supplementary
materials for our research are available on GitHub at
https://github.com/USTC-StarTeam/Large-Recommendation-Models.