Abstract: Aligning large-scale commercial models with user intent is crucial for preventing harmful outputs. Current alignment methods rely on human supervision, which becomes impractical as models grow more capable. Once models surpass human knowledge, humans can no longer provide accurate feedback efficiently.
A recently proposed solution is to use a weaker model to supervise a stronger one. This approach leverages the evaluation ability of weaker models to reduce the burden on human supervisors.
Previous work has demonstrated the effectiveness of weak-to-strong generalization for language-only models. We extend this concept to vision-language models, adapting its benefits to a multi-modal setting.
In our study, we explore weak-to-strong generalization for CLIP-based classification. We propose \emph{class prototype learning} (CPL), a method that enhances CLIP's classification capabilities by learning more representative prototypes for each category (see the sketch after the abstract).
Our findings indicate that, despite using a simple loss function under weak supervision, CPL yields robust improvements, particularly when pretraining is limited. Extensive experiments demonstrate the effectiveness of our approach in these settings, achieving a 3.67\% improvement over strong baseline methods.
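The submission itself does not include code; the following PyTorch sketch shows one plausible reading of CPL under the stated assumptions: class prototypes are initialized from CLIP text embeddings, treated as learnable parameters over frozen CLIP image features, and refined with a plain cross-entropy loss on pseudo-labels produced by a weaker supervisor model. All names (`ClassPrototypeLearner`, `image_feats`, `weak_labels`) are illustrative, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassPrototypeLearner(nn.Module):
    """Learnable class prototypes over frozen CLIP image features.

    Prototypes are seeded from CLIP text embeddings of the class names
    and refined with cross-entropy on labels from a weaker supervisor.
    """

    def __init__(self, text_embeddings: torch.Tensor, temperature: float = 0.07):
        super().__init__()
        # One learnable prototype per class, initialized from the text encoder.
        self.prototypes = nn.Parameter(text_embeddings.clone())
        self.temperature = temperature

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between image features and class prototypes,
        # scaled by a temperature, gives the classification logits.
        img = F.normalize(image_feats, dim=-1)
        proto = F.normalize(self.prototypes, dim=-1)
        return img @ proto.t() / self.temperature  # (batch, num_classes)


# Illustrative training step; random tensors stand in for real CLIP features.
num_classes, dim = 10, 512
model = ClassPrototypeLearner(torch.randn(num_classes, dim))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

image_feats = torch.randn(32, dim)                   # frozen CLIP image features
weak_labels = torch.randint(0, num_classes, (32,))   # weak model's pseudo-labels

optimizer.zero_grad()
logits = model(image_feats)
loss = F.cross_entropy(logits, weak_labels)          # the "simple loss" under weak supervision
loss.backward()
optimizer.step()
```

Keeping the CLIP encoders frozen and updating only the prototypes is one natural design choice consistent with the abstract's emphasis on a simple loss; the paper may differ in which components are trained.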
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We added experiments during the rebuttal and reworded some of the text.
Assigned Action Editor: ~Weijian_Deng1
Submission Number: 4189