MobileCLIP2: Improving Multi-Modal Reinforced Training

TMLR Paper 4668 Authors

14 Apr 2025 (modified: 27 Apr 2025) · Under review for TMLR · License: CC BY 4.0
Abstract: Foundation image-text models such as CLIP with zero-shot capabilities enable a wide array of applications. MobileCLIP is a recent family of image-text models with 3-15ms latency and 50-150M parameters that achieves state-of-the-art zero-shot accuracy. The main ingredients in MobileCLIP were its low-latency, lightweight architectures and a novel multi-modal reinforced training that made knowledge distillation from multiple caption generators and CLIP teachers efficient, scalable, and reproducible. In this paper, we improve the multi-modal reinforced training of MobileCLIP through: 1) better CLIP teacher ensembles trained on the DFN dataset, and 2) improved captioner teachers trained on the DFN dataset and fine-tuned on a diverse selection of high-quality image-caption datasets. Through ablations, we discover new insights such as the importance of temperature tuning in contrastive knowledge distillation, the effectiveness of caption-generator fine-tuning for caption diversity, and the additive improvement from combining synthetic captions generated by multiple models. We train a new family of models called MobileCLIP2 and achieve state-of-the-art ImageNet-1k zero-shot accuracies at low latencies. In particular, we observe a 2.2% improvement in ImageNet-1k accuracy for MobileCLIP2-B compared with the MobileCLIP-B architecture. Notably, MobileCLIP2-XL matches the zero-shot accuracy of SigLIP-SO400M/14 on ImageNet-1k while being 2× smaller, and improves on DFN ViT-L/14 at 2.5× lower latency. We will release the data generation code and our pretrained models. The data generation code makes it easy to create new reinforced datasets with arbitrary teachers using distributed scalable processing.
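The abstract highlights temperature tuning in contrastive knowledge distillation as one of the key ablation findings. As a rough illustration only (the paper's actual loss and hyperparameters are not specified here), a temperature-scaled contrastive distillation objective is commonly implemented as a KL divergence between the teacher's and student's in-batch image-text similarity distributions; the function below is a hypothetical sketch in PyTorch, with `tau_s` and `tau_t` as the separately tunable student and teacher temperatures.

```python
import torch
import torch.nn.functional as F


def contrastive_distill_loss(student_img, student_txt,
                             teacher_img, teacher_txt,
                             tau_s=0.07, tau_t=0.07):
    """Hypothetical contrastive KD loss: KL between teacher and student
    in-batch image-to-text similarity distributions, each scaled by its
    own temperature. Not the paper's exact formulation."""
    # L2-normalize all embeddings so dot products are cosine similarities
    si = F.normalize(student_img, dim=-1)
    st = F.normalize(student_txt, dim=-1)
    ti = F.normalize(teacher_img, dim=-1)
    tt = F.normalize(teacher_txt, dim=-1)

    # In-batch image-to-text similarity logits, temperature-scaled
    s_logits = si @ st.t() / tau_s
    t_logits = ti @ tt.t() / tau_t

    # KL(teacher || student) over each row's distribution across the batch
    return F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1),
                    reduction="batchmean")
```

Lowering the teacher temperature sharpens the target distribution toward hard one-hot matches, while raising it transfers more of the teacher's soft inter-sample similarity structure, which is why this knob matters in distillation.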
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: /forum?id=ZAnyvtYeAL
Changes Since Last Submission:

The font did not match the TMLR style format because of a redundant import of the LaTeX package "times". The font and style now match recent TMLR accepted submissions.

Assigned Action Editor: Liang-Chieh Chen
Submission Number: 4668