FedACL: A Collaborative Federated Fine-Tuning Framework for Large Language Models With AWLoRA and Contrastive Learning
Abstract: Federated learning (FL) is a powerful framework that enables collaborative learning across decentralized data sources, addressing privacy concerns in sensitive domains. However, applying large pretrained models in federated environments poses challenges, including data heterogeneity and computational inefficiency. In this article, we propose FedACL, a novel federated fine-tuning framework designed to enhance the efficiency of large language models in federated settings. FedACL integrates two key modules: 1) attention-aware low-rank adaptation (AWLoRA), which reduces the number of parameters that need fine-tuning; and 2) model contrastive learning (MCL) specifically tailored for pretrained large-scale models, which improves the model's robustness and accelerates convergence. Our approach significantly reduces computational costs and communication overhead while maintaining privacy, making it highly suitable for real-world applications. Extensive experiments on datasets such as CIFAR-10, MNIST, AG-News, CIFAR-100, and LEDGAR demonstrate that FedACL outperforms existing federated fine-tuning methods in terms of computational efficiency, accuracy, and robustness, offering promising scalability and adaptability for future intelligent applications.
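To make the parameter-reduction claim concrete, the following is a minimal NumPy sketch of plain low-rank adaptation (LoRA), the idea underlying AWLoRA; the attention-aware weighting described in the abstract is not shown, and all dimensions, names, and initializations here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of low-rank adaptation: the frozen pretrained weight W0 is
# augmented with a trainable low-rank update B @ A. In a federated
# setting, only A and B would be fine-tuned and communicated.
d_in, d_out, rank = 768, 768, 8  # assumed transformer-like layer size

rng = np.random.default_rng(0)
W0 = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init

def lora_forward(x):
    # Output of the adapted layer: base path plus low-rank update path.
    return x @ W0.T + x @ A.T @ B.T

full_params = W0.size           # 768 * 768 = 589824
lora_params = A.size + B.size   # 2 * 8 * 768 = 12288
print(f"trainable fraction: {lora_params / full_params:.3%}")  # about 2.1%
```

With rank 8 on a 768x768 weight, the trainable (and communicated) parameter count drops to roughly 2% of the full matrix, which is the kind of saving that makes federated fine-tuning of large models tractable.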
DOI: 10.1109/tcss.2025.3628642