Keywords: Model Reprogramming, Black-box Visual Reprogramming
Abstract: Black-box model reprogramming (BMR) aims to re-purpose black-box pre-trained models (i.e., APIs) for target tasks by learning input patterns (e.g., via Zeroth-Order Optimization (ZOO)) that transform model outputs to match target labels. However, ZOO-based BMR is *inefficient*, requiring *extensive API calls*, which can be costly, and suffering from unstable optimization. More critically, we find this paradigm is becoming ineffective on modern, real-world APIs (e.g., GPT-4o), which can ignore the input perturbations ZOO relies on, leading to negligible performance gains. To address these limitations, we propose **PoRL** (Prime Once, then Reprogram Locally), an alternative strategy that shifts adaptation to a local model amenable to white-box training. PoRL performs a one-time priming step to transfer knowledge from the service API to a local pre-trained encoder. This single, efficient interaction is then followed by highly effective white-box model reprogramming applied directly to the local model. Consequently, all subsequent adaptation and inference rely solely on the local model, *eliminating* further API costs. Experiments demonstrate PoRL's effectiveness where prior methods fail: on GPT-4o, PoRL achieves a +27.8\% gain over the zero-shot baseline, a setting where ZOO yields no improvement. More broadly, across ten diverse datasets, PoRL outperforms state-of-the-art methods with an average accuracy gain of +2.5\% for VLMs and +15.6\% for VMs, while reducing API calls by over 99.99\%. PoRL thus offers a robust and highly efficient solution for adapting modern black-box models.
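To make the two-stage idea in the abstract concrete, here is a minimal, illustrative PyTorch sketch (not the authors' implementation). It assumes a hypothetical `query_api` function that returns the black-box API's soft predictions, a frozen torchvision ResNet-18 as the local encoder, an additive learnable input pattern, and a linear output-label mapping; the paper's actual priming objective and reprogramming design may differ.

```python
# Illustrative sketch only: one-time priming from a black-box API, then
# purely local white-box reprogramming. `query_api` is a hypothetical stand-in.
import torch
import torch.nn as nn
import torchvision.models as models


class ReprogrammedLocalModel(nn.Module):
    """Frozen local encoder wrapped with a learnable input pattern
    (visual prompt) and a linear output-label mapping."""

    def __init__(self, num_target_classes: int):
        super().__init__()
        self.encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in self.encoder.parameters():
            p.requires_grad = False  # keep the pre-trained encoder frozen
        # Learnable additive pattern applied to the resized input image.
        self.input_pattern = nn.Parameter(torch.zeros(1, 3, 224, 224))
        # Map the encoder's 1000-way logits to the target label space.
        self.label_map = nn.Linear(1000, num_target_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = nn.functional.interpolate(
            x, size=(224, 224), mode="bilinear", align_corners=False
        )
        return self.label_map(self.encoder(x + self.input_pattern))


def prime_once(local_model, unlabeled_loader, query_api, device="cpu"):
    """One-time priming: distill the API's soft predictions into the local
    model on a small unlabeled set; the API is never called afterwards."""
    trainable = [p for p in local_model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=1e-3)
    local_model.train()
    for images in unlabeled_loader:
        images = images.to(device)
        with torch.no_grad():
            api_probs = query_api(images)  # the single round of API calls
        logits = local_model(images)
        loss = nn.functional.kl_div(
            logits.log_softmax(dim=-1), api_probs, reduction="batchmean"
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
    return local_model  # subsequent reprogramming/inference is purely local
```

After priming, the learnable `input_pattern` and `label_map` can continue to be trained with ordinary gradient descent on the target task, which is the white-box reprogramming step the abstract contrasts with query-hungry ZOO.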
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 15915