Learning and Leveraging Verifiers to Improve Planning Capabilities of Pre-trained Language Models

Published: 09 Jun 2023, Last Modified: 18 Aug 2023. PRL 2023 (IJCAI) Withdrawn Submission.
Keywords: Planning, Large Language Models, Verifiers
TL;DR: The use of verifiers and diverse sampling is critical to improving the planning capabilities of LLMs.
Abstract: There have been widespread claims in the literature about the emergent reasoning capabilities of pre-trained large language models. However, recent studies have found that their ability to plan remains questionable. Through experiments with GPT-2, we empirically demonstrate that a fine-tuned baseline performs poorly because it violates the preconditions of actions in the plans it generates. To improve the planning capabilities of a fine-tuned LLM, we train a verifier that classifies actions as valid or invalid in a given state. By randomly sampling actions from the same dataset, we generate examples of invalid actions, which are then used to train a verifier that checks action applicability. With diverse sampling from the generator and a verifier that prunes invalid trajectories, we show significant gains in success rate on the Blocksworld domain. Additionally, we show that fine-tuning the GPT-2 generator itself to create the verifier generalizes better than fine-tuning the base GPT-2. Lastly, we investigate the role of the sampling temperature, which can be used to control the exploration-exploitation tradeoff.
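
Below is a minimal, hypothetical sketch of the generate-then-verify loop the abstract describes, written with standard Hugging Face transformers GPT-2 classes: sample several candidate plans at a chosen temperature, then keep only plans in which the verifier classifies every action as valid. The checkpoint names, the one-action-per-line plan format, and the helper names (`sample_plans`, `action_is_valid`, `verified_plans`) are illustrative assumptions, not the authors' released code; a full implementation would also update the world state after each action before querying the verifier.

```python
# Hypothetical generate-then-verify sketch for the approach in the abstract.
# Checkpoint names and the plan/state encodings are assumptions.
import torch
from transformers import (
    GPT2ForSequenceClassification,
    GPT2LMHeadModel,
    GPT2Tokenizer,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Stand-ins for the fine-tuned plan generator and the action-validity verifier.
generator = GPT2LMHeadModel.from_pretrained("gpt2")
verifier = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
verifier.config.pad_token_id = tokenizer.pad_token_id


def sample_plans(problem: str, k: int = 10, temperature: float = 1.0) -> list[str]:
    """Draw k diverse candidate plans; higher temperature means more exploration."""
    inputs = tokenizer(problem, return_tensors="pt")
    outputs = generator.generate(
        **inputs,
        do_sample=True,
        temperature=temperature,
        num_return_sequences=k,
        max_new_tokens=128,
        pad_token_id=tokenizer.pad_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]


def action_is_valid(state: str, action: str) -> bool:
    """Score a (state, action) pair with the verifier; label 1 = applicable."""
    enc = tokenizer(state + " " + action, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = verifier(**enc).logits
    return logits.argmax(dim=-1).item() == 1


def verified_plans(problem: str, init_state: str, **kw) -> list[str]:
    """Prune any sampled plan containing an action the verifier rejects.

    For brevity, every action is checked against the initial state; a real
    implementation would simulate the (domain-specific) state forward.
    """
    kept = []
    for plan in sample_plans(problem, **kw):
        actions = [a for a in plan.splitlines() if a.strip()]
        if actions and all(action_is_valid(init_state, a) for a in actions):
            kept.append(plan)
    return kept
```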
Submission Number: 13