SHARP-Distill: A 68× Faster Recommender System with Hypergraph Neural Networks and Language Models

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: SHARP-Distill: Knowledge Distillation with HGNNs and Language Models for Fast Recommendations
Abstract: This paper proposes SHARP-Distill (Speedy Hypergraph And Review-based Personalised Distillation), a novel knowledge distillation approach based on the teacher-student framework that combines Hypergraph Neural Networks (HGNNs) with language models to improve recommendation quality while significantly reducing inference time. The teacher model leverages HGNNs to generate user and item embeddings from interaction data, capturing high-order and group relationships, and employs a pre-trained language model to extract rich semantic features from textual reviews. A contrastive learning mechanism enforces structural consistency between these representations. The student is a shallow, lightweight GCN, CompactGCN, designed to inherit high-order relationships while reducing computational complexity. Extensive experiments on real-world datasets demonstrate that SHARP-Distill achieves up to 68× faster inference than HGNN and 40× faster than LightGCN while maintaining competitive recommendation accuracy.
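To make the teacher-student setup concrete, here is a minimal PyTorch sketch of the distillation idea the abstract describes: a shallow one-step GCN student regressed onto frozen teacher embeddings, with an InfoNCE-style contrastive term for structural consistency. All names here (CompactGCNStudent, distillation_loss, the toy data) are illustrative assumptions, not the authors' code; see the linked repository for the actual SHARP-Distill implementation.

```python
# Hypothetical sketch of teacher-student distillation with a shallow GCN student.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompactGCNStudent(nn.Module):
    """A shallow, lightweight GCN: one normalized-adjacency propagation step
    followed by a linear projection into the teacher's embedding space."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # A single propagation step keeps inference cost low.
        return self.proj(adj_norm @ x)


def distillation_loss(student_emb, teacher_emb, temperature=0.2, alpha=0.5):
    """Combine direct embedding regression with an InfoNCE-style contrastive
    term that encourages structural consistency between representations."""
    # Regression term: match the (frozen) teacher embeddings directly.
    mse = F.mse_loss(student_emb, teacher_emb)

    # Contrastive term: each student row should be most similar to its own
    # teacher row, relative to all other rows in the batch.
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    logits = s @ t.T / temperature
    targets = torch.arange(s.size(0), device=s.device)
    nce = F.cross_entropy(logits, targets)

    return alpha * mse + (1 - alpha) * nce


# Toy usage: random features and a self-loop adjacency stand in for real
# interaction data; random vectors stand in for the pre-computed teacher
# (HGNN + language model) embeddings.
num_nodes, feat_dim, emb_dim = 8, 16, 32
x = torch.randn(num_nodes, feat_dim)
adj_norm = torch.eye(num_nodes)                  # placeholder normalized adjacency
teacher_emb = torch.randn(num_nodes, emb_dim)    # frozen teacher output

student = CompactGCNStudent(feat_dim, emb_dim)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(5):
    opt.zero_grad()
    loss = distillation_loss(student(x, adj_norm), teacher_emb.detach())
    loss.backward()
    opt.step()
```

At serving time only the student runs, which is where the reported speedups over HGNN and LightGCN would come from.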
Lay Summary: Online platforms like Amazon, Netflix, and Spotify use recommendation systems to suggest products, movies, or music you might like. These systems work by analyzing your past behavior and finding patterns among other users with similar tastes. However, the most accurate recommendation systems are very slow and expensive to run, making them impractical for real-time use when millions of people are browsing simultaneously. We developed SHARP-Distill, a new approach that makes recommendation systems 68 times faster while maintaining nearly the same accuracy. Our method works like a master-apprentice relationship: we first train a highly accurate but slow "teacher" system that combines two types of information: how users interact with items (such as purchases and ratings) and what they write in reviews. Then we create a much simpler "student" system that learns from the teacher to make recommendations of the same quality, but much faster. The key innovation is teaching the simple system not just the final answers, but also the reasoning process the complex system uses. This is like teaching someone to solve math problems by showing them both the answers and the step-by-step thinking, rather than having them memorize solutions. This approach makes high-quality personalized recommendations accessible to smaller companies that previously could not afford expensive computing infrastructure, and reduces energy consumption for large platforms. The faster system can provide instant recommendations even during peak usage times, improving user experience while being more environmentally friendly.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/S-Forouzandeh/SHARP-Distill/tree/main
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Hypergraph Learning, Knowledge Distillation, Contrastive Learning, Recommendation Systems
Submission Number: 8126