Snippet-based Conversational Recommender System

ACL ARR 2025 July Submission1100 Authors

29 Jul 2025 (modified: 20 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: Conversational Recommender Systems engage users in interactive dialogues to gather preferences and provide personalized recommendations. While existing studies have advanced conversational strategies, they often rely on predefined attributes or expensive, domain-specific annotated datasets, which limits their flexibility in handling diverse user preferences and adaptability across domains. We propose SnipRec, a novel resource-efficient approach that leverages user-generated content, such as customer reviews, to capture a broader range of user expressions. By employing large language models to map reviews and user responses into concise snippets, SnipRec represents user preferences and retrieves relevant items without the need for intensive manual data collection or fine-tuning. Experiments across the restaurant, book, and clothing domains show that snippet-based representations outperform document- and sentence-based representations, achieving Hits@10 of 0.25-0.55 with 3,000 to 10,000 candidate items while successfully handling free-form user responses.
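The abstract describes scoring candidate items by matching a user's free-form response against snippets distilled from reviews, and evaluating retrieval with Hits@10. As a rough illustration of that setup only (not the paper's actual pipeline), the sketch below uses a toy hashed bag-of-words embedding in place of a real dense encoder and LLM-generated snippets; all item names and snippets are invented for the example.

```python
import numpy as np

def embed(text, dim=512):
    # Toy embedding: hashed bag-of-words, a stand-in for a dense encoder.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def hits_at_k(ranked_ids, gold_id, k=10):
    # Hits@k: 1 if the gold item appears in the top-k ranked list, else 0.
    return int(gold_id in ranked_ids[:k])

# Candidate items, each represented by short snippets (hypothetical examples).
items = {
    "bistro": ["cozy candle-lit dinner spot", "extensive wine list"],
    "diner": ["cheap late-night burgers", "open 24 hours"],
    "cafe": ["quiet place to work", "strong espresso"],
}

# Score each item by the best cosine similarity between the user's
# free-form response and any of that item's snippets.
query = embed("somewhere quiet with good espresso")
scores = {
    item: max(float(query @ embed(s)) for s in snippets)
    for item, snippets in items.items()
}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # best-matching item under this toy scoring
```

Taking the maximum over an item's snippets lets a single matching snippet drive the item's score, which is one natural way to aggregate snippet-level similarities; the paper may use a different aggregation or re-ranking step.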
Paper Type: Long
Research Area: Information Retrieval and Text Mining
Research Area Keywords: conversational recommender system, retrieval, dense retrieval, document representation, re-ranking, applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Previous URL: https://openreview.net/forum?id=2E5CfVzdId
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: Yes, I want a different area chair for our submission
Reassignment Request Reviewers: Yes, I want a different set of reviewers
Justification For Not Keeping Action Editor Or Reviewers: We observed several issues, including a lack of specificity in the feedback (I1) and limited engagement during the rebuttal period (I11). Furthermore, the discussions across cycles were not well connected: reviewers continued to raise new concerns over time, contrary to ARR's guidelines, and the area chair did not appear to take any action to guide the reviewers.
Software: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Ethical Considerations
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 4.1 and Appendix A.3
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: Appendix A.3
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Appendix A.3
B4 Data Contains Personally Identifying Info Or Offensive Content: Yes
B4 Elaboration: User IDs were already anonymized by the dataset creator (Yelp/Amazon). Individual names were further masked in the course of data creation (Section 4.3 and Appendix A.3)
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Section 4.1 and Appendix A.3
B6 Statistics For Data: Yes
B6 Elaboration: Section 4.1
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 4.1 and Appendix B
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 4.1 and Appendix B
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 4.3
C4 Parameters For Packages: Yes
C4 Elaboration: Section 4.1 (including footnotes)
D Human Subjects Including Annotators: Yes
D1 Instructions Given To Participants: Yes
D1 Elaboration: Appendix A
D2 Recruitment And Payment: No
D2 Elaboration: Manual annotation was performed by the paper authors.
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: Yes
D5 Elaboration: Appendix A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: Yes
E1 Elaboration: Appendix A.3
Author Submission Checklist: yes
Submission Number: 1100