Keywords: Recommender Systems, Large Language Models, Multi-Task Recommendations
Abstract: This paper describes the solution from the AML_Lab@CityU team, which achieved second place in track 1 and track 5 (i.e., the overall track) and third place in tracks 2, 3, and 4 of the Amazon KDD Cup 2024. We construct a Large Language Model-based framework that answers diverse and complex online shopping questions by leveraging the multi-task and few-shot learning abilities of LLMs. The competition is challenging for three reasons: no training data is provided, the questions span multiple tasks and complex online shopping scenarios, and inference time and GPU memory are strictly limited. To tackle these challenges, we introduce a pipeline with three parts: base model selection, pre-trained model quantization, and prompt design. Our solution follows these three steps across all five tracks and demonstrates robust performance. Notably, our solution requires no fine-tuning, which broadens the applicability of the pipeline. Our code is released online for ease of reproduction.
Submission Number: 3