Keywords: language model, self-reflection, self-improvement, narrative
TL;DR: We efficiently perform self-reflection on tree structures for narrative
Abstract: While most language is formatted linearly, applications such as planning, trees of thought, and branching narrative represent text as a tree. Generating branching outputs from a language model (LM) is straightforward, but representing trees of text in a one-dimensional input is problematic. This makes popular self-reflection improvement methods prohibitively difficult to apply to branching language. In this work, we address this limitation by proposing a new method for improving trees of branching language. Our method iterates between reflecting on sampled paths through a tree and resampling problematic subtrees. We evaluate our method on a branching narrative task with the objective of improving every path through the tree. Our method produces narratives that an LM judge prefers 60% more often than unmodified narrative trees. Our method also scales to tree depths at which naive self-reflection methods fail.
Submission Number: 9
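The reflect-and-resample loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Node` class, the string-matching critic in `reflect`, and the stub generator in `resample` are all placeholder assumptions standing in for LM calls.

```python
import random

class Node:
    """A node in a branching-narrative tree (hypothetical structure)."""
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []

def sample_path(root):
    # Walk from the root to a leaf, choosing a random child at each step.
    path = [root]
    while path[-1].children:
        path.append(random.choice(path[-1].children))
    return path

def reflect(path):
    # Placeholder critic: flag the first node containing "???" as problematic.
    # A real system would instead ask an LM to critique the concatenated
    # path text and identify the offending node.
    for i, node in enumerate(path):
        if "???" in node.text:
            return i
    return None  # no problem found on this path

def resample(node):
    # Placeholder generator: a real system would regenerate the subtree
    # rooted here with an LM, conditioned on the reflection feedback.
    node.text = node.text.replace("???", "resolved")
    node.children = []

def improve_tree(root, iterations=10):
    # Iterate between reflecting on sampled paths and resampling
    # problematic subtrees, as the abstract describes.
    for _ in range(iterations):
        path = sample_path(root)
        bad = reflect(path)
        if bad is not None:
            resample(path[bad])

if __name__ == "__main__":
    root = Node("opening", [Node("twist ???"), Node("calm scene")])
    improve_tree(root)
```

Because each iteration touches only one root-to-leaf path, the cost per step stays linear in tree depth rather than in total tree size, which is what lets the approach scale to deep trees.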