Keywords: Machine Learning Theory, Black-Box Optimization, Non-vacuous Generalization Bounds, Privacy, Fair Use
TL;DR: We introduce black-box optimization as a method for LLM post-training and prove strong bounds on generalization, privacy, fair use, and data poisoning; experiments on LLMs demonstrate that the approach learns and improves performance.
Abstract: Gradient-based optimization is the workhorse of deep learning, offering efficient and scalable training via backpropagation. However, exposing gradients during training can leak sensitive information about the underlying data, raising privacy and security concerns such as susceptibility to data poisoning attacks. In contrast, black-box optimization methods, which treat the model as an opaque function and rely solely on function evaluations to guide optimization, offer a promising alternative in scenarios where data access is restricted, adversarial risks are high, or overfitting is a concern. This paper introduces BBoxER, an evolutionary black-box method for LLM post-training that induces an information bottleneck via implicit compression of the training data. Leveraging the tractability of information flow, we provide non-vacuous generalization bounds and strong theoretical guarantees for differential privacy, robustness to data poisoning attacks, and robustness to extraction attacks. In experiments with LLMs, we demonstrate empirically that black-box optimization methods, despite the scalability and computational challenges inherent to such approaches, are able to learn: a few iterations of BBoxER improve performance, generalize well on a benchmark of reasoning datasets, and are robust to membership inference attacks. This positions BBoxER as an attractive add-on to gradient-based optimization, suitable for deployment in restricted or privacy-sensitive environments while also providing non-vacuous generalization guarantees.
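The abstract does not spell out BBoxER's update rule, so the sketch below is only an illustration of the general idea it relies on: an evolutionary black-box optimizer that treats the model as an opaque function and uses nothing but function evaluations, never gradients. The objective `black_box_score`, the parameter vector `theta`, and all hyperparameters are hypothetical stand-ins (here a noisy toy function), not the paper's actual method or benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(theta: np.ndarray) -> float:
    """Stand-in for an opaque evaluation, e.g. a held-out metric of a
    post-trained LLM under parameters `theta`. Here: a noisy toy quadratic."""
    target = np.array([0.5, -1.0, 2.0])
    return -float(np.sum((theta - target) ** 2)) + rng.normal(scale=0.01)

def one_plus_one_es(dim: int = 3, iters: int = 200, sigma: float = 0.5):
    """(1+1) evolution strategy: keep the parent unless a Gaussian mutation
    scores at least as well. Only function evaluations are used, so no
    gradients of the model or the training data are ever exposed."""
    parent = np.zeros(dim)
    parent_score = black_box_score(parent)
    for _ in range(iters):
        child = parent + sigma * rng.normal(size=dim)
        child_score = black_box_score(child)
        if child_score >= parent_score:   # greedy (1+1) selection
            parent, parent_score = child, child_score
            sigma *= 1.1                  # step-size adaptation (1/5th-rule flavor)
        else:
            sigma *= 0.9
    return parent, parent_score

best_theta, best_score = one_plus_one_es()
print("best parameters:", best_theta, "score:", round(best_score, 4))
```

Because each iteration communicates only a comparison of scalar scores back to the optimizer, the information flow from the training data into the model update is narrow and tractable, which is the property the paper's generalization and privacy arguments build on.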
Primary Area: learning theory
Submission Number: 17765