Enhancing User Behavior Alignment by Input-Level Model Cooperation and Model-Level Parameter Optimization

25 Jul 2024 (modified: 05 Aug 2024) · KDD 2024 Workshop Amazon KDD Cup Submission · CC BY 4.0
Keywords: User Behavior Alignment, Large Language Model, Model Cooperation, Supervised Fine-Tuning
TL;DR: Recommending products on e-commerce platforms is complex due to evolving user preferences. This paper proposes the MCPO framework, which improves user behavior alignment in LLMs through input-level model cooperation and model-level parameter optimization.
Abstract: In this paper, we investigate how to improve the large language model (LLM) on the user behavior alignment task, which is constrained by input confusion and process uncertainty. We propose a novel framework that employs input-level model cooperation and model-level parameter optimization. Specifically, in input-level model cooperation, we use small language models to provide supplementary information to the LLM from both chain-of-thought and semantic similarity perspectives. In model-level parameter optimization, we first use data selection methods to train different models and then hybridize them to obtain the best one. The proposed framework was verified in the KDD Cup 2024, where it achieved rank-2 performance; the code is open-sourced here.
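The abstract's model-level step trains several checkpoints on different data selections and then hybridizes them. The submission page does not state the merging rule, so the sketch below assumes simple linear interpolation of parameters; the function name `hybridize`, the mixing weight `alpha`, and the toy parameter dicts are all illustrative, not the authors' implementation.

```python
def hybridize(params_a, params_b, alpha=0.5):
    """Merge two models' parameters as alpha * A + (1 - alpha) * B.

    params_a / params_b: dicts mapping parameter names to lists of floats,
    standing in for the state dicts of two fine-tuned checkpoints.
    """
    assert params_a.keys() == params_b.keys(), "checkpoints must share architecture"
    merged = {}
    for name in params_a:
        merged[name] = [alpha * a + (1.0 - alpha) * b
                        for a, b in zip(params_a[name], params_b[name])]
    return merged

# Two toy "checkpoints" trained on different data selections.
model_a = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0, 0.0]}
model_b = {"layer.weight": [3.0, 4.0], "layer.bias": [1.0, 1.0]}

merged = hybridize(model_a, model_b, alpha=0.5)
print(merged["layer.weight"])  # [2.0, 3.0]
```

In practice one would sweep `alpha` (or per-layer weights) on a validation set and keep the best merged model, which matches the abstract's "hybridize them to obtain the best one."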
Submission Number: 1