GRAM: Fast Fine-tuning of Pre-trained Language Models for Content-based Collaborative Filtering

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=mub1gfLHhfK
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Content-based collaborative filtering (CCF) predicts user-item interactions based on both users’ interaction history and items’ content information. Recently, pre-trained language models (PLM) have been used to extract high-quality item encodings for CCF. However, it is resource-intensive to train a PLM-based CCF model in an end-to-end (E2E) manner, since optimization involves back-propagating through every content encoding within a given user interaction sequence. To tackle this issue, we propose GRAM (GRadient Accumulation for Multi-modality in CCF), which exploits the fact that a given item often appears multiple times within a batch of interaction histories. Specifically, Single-step GRAM aggregates each item encoding's gradients for back-propagation, which is theoretically equivalent to standard E2E training. As an extension of Single-step GRAM, we propose Multi-step GRAM, which increases the gradient update latency, achieving a further speedup with drastically less GPU memory. GRAM significantly improves training efficiency (up to 146x) on five datasets from two task domains, Knowledge Tracing and News Recommendation. Our code is available at https://github.com/yoonseok312/GRAM.
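The sketch below illustrates the gradient-accumulation idea described in the abstract: each unique item is encoded once, the interaction loss is back-propagated onto the detached item encodings (so gradients from repeated occurrences of the same item are summed automatically), and the encoder then receives a single backward pass per unique item. The toy encoder, predictor, and data here are hypothetical stand-ins, not the paper's actual architecture; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a tiny "content encoder" in place of the PLM,
# and a batch of user histories indexing a shared item vocabulary.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
predictor = nn.Linear(8, 1)

item_contents = torch.randn(5, 16)           # content features of 5 unique items
histories = torch.tensor([[0, 1, 1, 3],      # item indices per user; items repeat
                          [2, 1, 0, 0]])
labels = torch.rand(2, 4)

# Step 1: encode each unique item ONCE, then detach so the interaction loss
# does not back-propagate through the encoder separately for every occurrence.
item_enc = encoder(item_contents)
item_enc_detached = item_enc.detach().requires_grad_(True)

# Step 2: run the interaction model on the detached encodings and back-propagate.
# Autograd sums gradients over repeated occurrences of the same item,
# which is the "gradient accumulation per item" effect.
seq_enc = item_enc_detached[histories]                        # (2, 4, 8)
loss = nn.functional.mse_loss(predictor(seq_enc).squeeze(-1), labels)
loss.backward()

# Step 3: back-propagate through the encoder once per unique item,
# feeding the accumulated gradients in as the upstream signal.
item_enc.backward(gradient=item_enc_detached.grad)
```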
Presentation Mode: This paper will be presented in person in Seattle
Copyright Consent Signature (type Name Or NA If Not Transferrable): Yoonseok Yang
Copyright Consent Name And Address: Name: Yoonseok Yang, Address: 521, Teheran-ro, Gangnam-gu, Seoul, Republic of Korea