Abstract: As powerful generative pre-trained language models like GPT become more prevalent, it is imperative to explore methods for customizing these models to downstream datasets. Numerous recent studies, exemplified by Chain of Thoughts (CoT) and Tree of Thoughts (ToT), have underscored prompting as the primary approach for harnessing Large Language Models (LLMs) to tackle various tasks. A novel approach called Graph of Thoughts (GoT) has been introduced: a framework that enhances the prompting abilities of LLMs by merging different thoughts from these models into collaborative results, distilling the essence of entire networks of thoughts, or refining thoughts through feedback loops. In our study, we identify a problem with GoT: its Prompter requires significant human intervention, meaning that GoT incurs costs both for humans acting as prompt engineers and for ChatGPT calls to handle tasks, and a human is sometimes needed to evaluate and score thoughts via the Error Score. To address this problem, we propose Auto Graph of Thoughts (AutoGoT), which extends GoT by allowing the LLM to freely generate prompts for each type of thought and then using those prompts to produce the output of each thought. Unlike GoT's static prompts, these LLM-generated prompts adapt to multiple tasks without changing the base prompt. Experiments on sorting, intersection, keyword counting, and document merging show that AutoGoT is more cost-effective than GoT and achieves competitive scores without using the Error Score thought.
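The abstract only sketches the two-stage mechanism (the LLM first writes a prompt for a thought type, then that prompt is used to produce the thought's output). The following is a minimal illustrative sketch, not the paper's implementation: the llm() helper, BASE_PROMPT, and the function names are hypothetical stand-ins, and the model client is deliberately left unwired.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model call (e.g., a ChatGPT request).

    Replace with an actual model client; kept abstract here on purpose.
    """
    raise NotImplementedError("wire up your model client here")


# GoT-style: one static, hand-written prompt per thought type.
STATIC_SORT_PROMPT = "Sort the following list of numbers: {input}"

def got_thought(task_input: str) -> str:
    # The prompt is fixed; a human prompt engineer wrote it for this task.
    return llm(STATIC_SORT_PROMPT.format(input=task_input))


# AutoGoT-style (as described in the abstract): the LLM first generates
# the prompt for a thought type, then that prompt produces the output.
BASE_PROMPT = (
    "You are a prompt engineer. Write a concise prompt that instructs a "
    "language model to perform this operation on its input: {operation}. "
    "Use the placeholder {{input}} where the data should go."
)

def autogot_thought(operation: str, task_input: str) -> str:
    # Stage 1: the LLM writes a task-specific prompt from the base prompt.
    generated_prompt = llm(BASE_PROMPT.format(operation=operation))
    # Stage 2: the generated prompt is applied to the actual input.
    return llm(generated_prompt.format(input=task_input))


if __name__ == "__main__":
    # The same base prompt adapts across tasks with no manual rewriting.
    for op, data in [("sort a list of numbers", "[5, 2, 9, 1]"),
                     ("count keyword occurrences", "sample text ...")]:
        print(autogot_thought(op, data))
```

The point of the sketch is the contrast: got_thought needs a human-authored prompt per task, while autogot_thought reuses one base prompt across tasks, which is the cost saving the abstract claims.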
DOI: 10.1145/3674558.3674574