Learning the Unseen: Peer-to-Peer Fine-tuning of Vision Transformers

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Vision Transformers, Distributed Peer-to-Peer Learning, Transfer Learning, Heterogeneous Data
Abstract: In this paper, we propose a distributed training framework for fine-tuning vision transformers. We address the training process in scenarios where heterogeneous data is geographically distributed across a network of nodes communicating over a peer-to-peer topology. These nodes can exchange information with their neighbors but do not share their private training data, in order to preserve data privacy. Training an entire vision transformer from scratch is typically impractical under such computational constraints, so it is highly preferable to take a pre-trained transformer and fine-tune it for specific downstream tasks. We therefore propose a privacy-aware distributed fine-tuning method for vision-transformer-based downstream tasks. We demonstrate that our approach enables the distributed models to achieve performance comparable to that of a single computational device with access to the entire training dataset. We present numerical experiments on distributed fine-tuning of ViT, DeiT, and Swin-transformer models on various datasets.
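The abstract does not spell out the paper's exact update rule, but the setting it describes (local fine-tuning on private shards plus parameter exchange with neighbors) matches gossip-style decentralized optimization. Below is a minimal, illustrative sketch under that assumption: each of a hypothetical `NUM_NODES` nodes fine-tunes its own copy of a small classification head on top of a frozen backbone (here a stand-in linear layer, not an actual ViT), then averages its parameters with its ring neighbors; the mixing weights and ring topology are my assumptions, not the paper's.

```python
# Illustrative sketch of gossip-style decentralized fine-tuning.
# Assumptions (not from the paper): ring topology, fixed mixing weights
# (0.5 self, 0.25 per neighbor), a linear layer standing in for a frozen
# pre-trained ViT backbone. Raw data never leaves a node.
import copy
import torch
import torch.nn as nn

NUM_NODES = 4  # hypothetical network size

torch.manual_seed(0)
backbone = nn.Linear(32, 16)          # stand-in for a frozen pre-trained backbone
for p in backbone.parameters():
    p.requires_grad_(False)

# Each node fine-tunes its own copy of a small classification head.
heads = [nn.Linear(16, 10) for _ in range(NUM_NODES)]
opts = [torch.optim.SGD(h.parameters(), lr=0.05) for h in heads]
loss_fn = nn.CrossEntropyLoss()

# Private shards: synthetic data standing in for heterogeneous local datasets.
shards = []
for i in range(NUM_NODES):
    x = torch.randn(64, 32)
    y = torch.randint(0, 10, (64,))
    shards.append((x, y))

def local_step(i):
    """One SGD step on node i's private shard (data never leaves the node)."""
    x, y = shards[i]
    opts[i].zero_grad()
    logits = heads[i](backbone(x))
    loss_fn(logits, y).backward()
    opts[i].step()

def gossip_average():
    """Mix each node's head parameters with its two ring neighbors."""
    snapshots = [copy.deepcopy(h.state_dict()) for h in heads]
    for i in range(NUM_NODES):
        left, right = (i - 1) % NUM_NODES, (i + 1) % NUM_NODES
        mixed = {k: 0.5 * snapshots[i][k]
                    + 0.25 * snapshots[left][k]
                    + 0.25 * snapshots[right][k]
                 for k in snapshots[i]}
        heads[i].load_state_dict(mixed)

for _ in range(20):          # alternate local training and neighbor averaging
    for i in range(NUM_NODES):
        local_step(i)
    gossip_average()
```

Only model parameters cross the network in this sketch, which is what gives the privacy property the abstract claims; the doubly stochastic mixing weights make the node parameters converge toward a common average over rounds.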
Supplementary Material: zip
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6636