Tailoring LLMs for Online Shopping: A Strategy for Multi-Task Learning and Domain-Specific Performance Enhancement

02 Aug 2024 (modified: 05 Aug 2024) · KDD 2024 Workshop Amazon KDD Cup Submission · CC BY 4.0
Keywords: Recommendation, Large Language Model
Abstract: Online shopping involves a variety of tasks, from browsing to making a purchase, which calls for multi-task learning models that can leverage shared knowledge across tasks. Large Language Models (LLMs) are well suited to this setting, as a single model can handle multiple tasks by adapting to different prompts. Consequently, Amazon launched the KDD Cup 2024 Multi-Task Online Shopping Challenge for LLMs. In this paper, we present a comprehensive solution that encompasses data processing, model training, in-context learning, accelerated model inference, and post-processing. To meet the competition's requirements, we chose the open-source Qwen2-72B as the base model. Our solution proved highly effective on online shopping tasks: our team secured 3rd place in Task 5 and 4th place in Task 1 and Task 3 of KDD Cup 2024.
Submission Number: 6
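
The abstract describes handling multiple shopping tasks with a single Qwen2-72B model via prompting and accelerated inference, but does not name a serving stack. The sketch below is a minimal illustration only, assuming the Qwen/Qwen2-72B-Instruct checkpoint served with vLLM and a tensor-parallel setup; the prompts, parameters, and library choice are assumptions, not details taken from the paper.

```python
# Hypothetical sketch: one LLM answering several shopping-task prompts,
# illustrating multi-task prompting with accelerated inference (vLLM assumed).
from vllm import LLM, SamplingParams

# Model variant, tensor_parallel_size, and dtype are illustrative assumptions.
llm = LLM(model="Qwen/Qwen2-72B-Instruct", tensor_parallel_size=4, dtype="bfloat16")

# Different tasks are expressed as different prompts to the same model.
prompts = [
    "Task: product question answering.\n"
    "Question: Is this phone case compatible with the iPhone 13?\nAnswer:",
    "Task: multiple choice.\n"
    "Question: Which attribute best describes 'waterproof hiking boots'?\n"
    "A. Material  B. Use case  C. Brand  D. Color\nAnswer with a single letter:",
]

params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(prompts, params)
for out in outputs:
    print(out.outputs[0].text.strip())
```

Greedy decoding (temperature 0) is used here because competition-style tasks typically expect a single deterministic answer; the actual decoding and post-processing choices of the authors are described in the paper itself.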
