Distilled Self-Critique of LLMs with Synthetic Data: a Bayesian Perspective

Published: 19 Mar 2024, Last Modified: 27 Apr 2024 · Tiny Papers @ ICLR 2024 (Notable) · CC BY 4.0
Keywords: RLAIF, distillation, language models, Gibbs sampling
TL;DR: We model the RLAIF process as Bayesian inference to propose distilled Self-Critique (dSC).
Abstract: This paper proposes an interpretation of RLAIF as Bayesian inference by introducing distilled Self-Critique (dSC), which refines the outputs of an LLM through a Gibbs sampler that is later distilled into a fine-tuned model. Requiring only synthetic data, dSC is evaluated in experiments on safety, sentiment, and privacy control, showing it can be a viable and cheap alternative for aligning LLMs. Code released at https://github.com/vicgalle/distilled-self-critique.
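
To make the abstract's Gibbs-sampler view concrete, below is a minimal sketch of a self-critique loop that alternates between sampling a critique conditioned on the current response and sampling a revised response conditioned on that critique. All names here (`llm`, `CRITIQUE_PROMPT`, `REVISE_PROMPT`, `gibbs_self_critique`) are hypothetical placeholders, not the authors' API; see the linked repository for the actual implementation.

```python
from typing import Callable

# Hypothetical prompt templates; the paper's templates may differ.
CRITIQUE_PROMPT = (
    "Critique the following response for possible issues:\n{response}"
)
REVISE_PROMPT = (
    "Original prompt: {prompt}\n"
    "Previous response: {response}\n"
    "Critique: {critique}\n"
    "Write an improved response that addresses the critique."
)

def gibbs_self_critique(
    llm: Callable[[str], str],  # any prompt -> completion function
    prompt: str,
    n_steps: int = 3,
) -> list[tuple[str, str]]:
    """Alternately sample a critique and a revised response, Gibbs-style."""
    trace: list[tuple[str, str]] = []
    response = llm(prompt)  # initial draw of the response variable
    for _ in range(n_steps):
        # Sample a critique conditioned on the current response.
        critique = llm(CRITIQUE_PROMPT.format(response=response))
        # Sample a revised response conditioned on the critique.
        response = llm(
            REVISE_PROMPT.format(
                prompt=prompt, response=response, critique=critique
            )
        )
        trace.append((critique, response))
    return trace

# Distillation step (sketch): the final refined responses serve as synthetic
# fine-tuning targets for the base model, e.g. pairs (prompt, trace[-1][1]).
```

The Bayesian reading is that the response and the critique act as two variables sampled alternately from their conditionals, and distillation amortizes the chain's output into a single fine-tuned model.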
Submission Number: 27