Hierarchical Collaborative Fine-Tuning for Personalized Edge-End Hybrid Inference of Stable Diffusion Models

Published: 26 Jan 2026, Last Modified: 26 Jan 2026, AAAI 2026 Workshop on ML4Wireless Poster, CC BY 4.0
Keywords: AIGC, Edge AI, Generative Model Fine-Tuning, Stable Diffusion Model, Federated Learning, Low-Rank Adaptation, Hybrid-style Image Generation, Multi-user Personalization
TL;DR: We introduce an edge-based collaborative fine-tuning framework that leverages federated learning and Low-Rank Adaptation adapters for efficient, privacy-preserving personalization of diffusion models.
Abstract: Diffusion models (DMs) have emerged as powerful Artificial Intelligence Generated Content (AIGC) tools for high-quality image synthesis. However, effective personalization at the edge faces several challenges: heterogeneous user preferences, limited local data, and the intensive computational demands placed on resource-constrained devices. To address these challenges, we first highlight the limitations of existing works in communication efficiency and scalability, and then introduce an edge-assisted collaborative fine-tuning framework built upon Low-Rank Adaptation (LoRA) for parameter-efficient local tuning. Within a federated learning (FL) framework, we jointly train user-specific models on edge devices and a global model on the server, enabling collaborative personalization while preserving data privacy. The shared global model is enriched with multiple LoRA adapters and can be employed in a hybrid inference process to improve communication efficiency. To mitigate feature distribution shifts caused by style diversity, the server performs hierarchical client clustering, with intra-cluster aggregation for enhanced personalization and inter-cluster interaction for cross-style alignment. Beyond improving inference efficiency, our framework also addresses privacy concerns: transmitting prompts that contain style or label information to a semi-trusted server could inadvertently expose user data. To mitigate this, we derive embeddings from user-specified keywords, reducing the risk of revealing sensitive dataset details. Evaluations show that our framework achieves accelerated convergence and scalable multi-user personalization, making it a practical solution for edge-constrained AIGC services.
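To make the hierarchical aggregation step concrete, the sketch below illustrates one possible server-side realization under simplifying assumptions not taken from the paper: each client's LoRA adapter is represented by plain A/B factor tensors, intra-cluster aggregation is sample-weighted FedAvg-style averaging, and inter-cluster interaction is modeled as a simple convex mixing of each cluster adapter toward the mean across clusters. All names (aggregate_lora, inter_cluster_mix, etc.) are hypothetical and the paper's actual aggregation rule may differ.

import torch
from collections import defaultdict

def aggregate_lora(client_updates, cluster_of, num_samples, inter_cluster_mix=0.1):
    """Hierarchical aggregation of per-client LoRA adapters (illustrative sketch).

    client_updates: dict client_id -> {"A": tensor(r, d), "B": tensor(d, r)}
    cluster_of:     dict client_id -> cluster_id (from server-side clustering)
    num_samples:    dict client_id -> local dataset size (used as FedAvg weights)
    Returns one aggregated adapter per cluster.
    """
    # Group client updates by their assigned cluster.
    clusters = defaultdict(list)
    for cid, update in client_updates.items():
        clusters[cluster_of[cid]].append((cid, update))

    # Intra-cluster aggregation: sample-weighted averaging of the LoRA factors.
    cluster_adapters = {}
    for k, members in clusters.items():
        total = sum(num_samples[cid] for cid, _ in members)
        agg = {name: torch.zeros_like(members[0][1][name]) for name in ("A", "B")}
        for cid, update in members:
            w = num_samples[cid] / total
            for name in ("A", "B"):
                agg[name] += w * update[name]
        cluster_adapters[k] = agg

    # Inter-cluster interaction: pull each cluster adapter slightly toward the
    # mean adapter across clusters, as a stand-in for cross-style alignment.
    global_mean = {
        name: torch.stack([a[name] for a in cluster_adapters.values()]).mean(dim=0)
        for name in ("A", "B")
    }
    for agg in cluster_adapters.values():
        for name in ("A", "B"):
            agg[name] = (1 - inter_cluster_mix) * agg[name] + inter_cluster_mix * global_mean[name]
    return cluster_adapters

Averaging the A and B factors separately, as done here, is a common approximation in LoRA-based federated fine-tuning; it keeps communication at the adapter level rather than materializing full weight deltas, which is one way to obtain the communication efficiency the abstract refers to.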
Submission Number: 17