Learning from human feedback has shown success in aligning large, pretrained models with human values. However, prior work has mostly focused on high-level labels, such as preferences between pairs of model outputs. Many domains, on the other hand, could benefit from more detailed feedback, such as corrections, explanations, and the reasoning of human users. Our work proposes using nuanced feedback in the form of human revisions for stronger alignment. In this paper, we ask expert designers to fix layouts generated by a generative layout model pretrained on a large-scale dataset of mobile screens. We then train a reward model based on how the designers revise these generated layouts. With the learned reward model, we optimize our model using reinforcement learning from human feedback (RLHF). Our method, Revision-Aware Reward Models (RARE), enables a generative model to produce more modern, designer-aligned layouts, demonstrating the potential of human corrections and stronger forms of feedback for improving generative models.
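To make the revision-based reward concrete, the sketch below shows one plausible way to train a reward model from (generated, revised) layout pairs. It is a minimal illustration, not the paper's implementation: it assumes layouts are flattened coordinate vectors, a simple MLP scorer, and a pairwise ranking loss in which the designer-revised layout should outscore the generated layout it was derived from; the actual RARE reward may instead be derived from the magnitude or nature of the revisions.

```python
import torch
import torch.nn as nn

# Hypothetical layout encoding: a flat vector of (x, y, w, h)
# coordinates for a fixed maximum number of UI elements.
MAX_ELEMENTS = 20
LAYOUT_DIM = MAX_ELEMENTS * 4

class RewardModel(nn.Module):
    """Scores a layout; trained so that designer-revised layouts
    receive higher reward than the raw model outputs they fix."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LAYOUT_DIM, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, layout: torch.Tensor) -> torch.Tensor:
        return self.net(layout).squeeze(-1)

def reward_loss(reward_model, generated, revised):
    """Pairwise ranking loss: the revised layout should outscore
    the generated layout it was derived from."""
    r_gen = reward_model(generated)
    r_rev = reward_model(revised)
    return -torch.nn.functional.logsigmoid(r_rev - r_gen).mean()

# Toy training step on a batch of (generated, revised) pairs.
reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

generated = torch.rand(8, LAYOUT_DIM)  # stand-in for model outputs
revised = torch.rand(8, LAYOUT_DIM)    # stand-in for designer revisions

loss = reward_loss(reward_model, generated, revised)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Once trained, such a reward model would be used as the scoring function in the RLHF stage, where the pretrained layout generator is fine-tuned with a policy-gradient method to maximize the learned reward.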