Abstract: To address the limited flexibility and efficiency of existing prompting paradigms in generating intermediate reasoning steps, this paper proposes a reasoning framework, LLM-AS, which innovatively combines the A* search algorithm with the reasoning process of large language models (LLMs). LLM-AS leverages the efficient exploration capability of the A* algorithm to avoid redundant expansion of high-cost nodes, significantly improving search efficiency and reducing the cost of invoking the LLM. Meanwhile, through the self-improvement mechanism of LLMs, LLM-AS ensures the quality of the generated solutions while minimizing model interactions. In addition, the flexibility of the A* search algorithm makes LLM-AS applicable to diverse thought organization structures, broadening the range of tasks it can handle. We conducted experiments on two complex tasks, Game of 24 and the 8-puzzle, comparing the accuracy of existing prompting paradigms and LLM-AS on both gpt-3.5-turbo and gpt-4.0. The experimental results show that LLM-AS effectively improves the ability of LLMs to solve complex tasks.
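To make the abstract's core idea concrete, the following is a minimal, self-contained Python sketch of A* search in which the heuristic score would, in LLM-AS, come from querying a language model. The `llm_heuristic` function here is a hypothetical placeholder (the actual prompting and scoring scheme is not specified in the abstract), and the toy string-editing task is illustrative only, not one of the paper's benchmarks.

```python
import heapq
from itertools import count


def llm_heuristic(state, goal):
    """Hypothetical stand-in for an LLM-based estimate of remaining cost.
    In LLM-AS this value would come from prompting the model to score a
    partial solution; here a trivial mismatch count keeps the sketch runnable."""
    return sum(1 for a, b in zip(state, goal) if a != b)


def a_star(start, goal, expand):
    """Generic A*: expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    tie = count()  # tie-breaker so heapq never has to compare states directly
    frontier = [(llm_heuristic(start, goal), next(tie), 0, start, [start])]
    best_g = {}    # cheapest cost found so far for each visited state
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if best_g.get(state, float("inf")) <= g:
            continue  # this state was already reached more cheaply
        best_g[state] = g
        for nxt, step_cost in expand(state):
            new_g = g + step_cost
            heapq.heappush(
                frontier,
                (new_g + llm_heuristic(nxt, goal), next(tie), new_g, nxt, path + [nxt]),
            )
    return None


if __name__ == "__main__":
    # Toy usage: each move rewrites one wrong character of the state (cost 1).
    def expand(state):
        for i, target in enumerate("goal"):
            if state[i] != target:
                yield state[:i] + target + state[i + 1:], 1

    print(a_star("gxxl", "goal", expand))  # e.g. ['gxxl', 'goxl', 'goal']
```

Because the placeholder heuristic never overestimates the true remaining cost in this toy setting, the search stays focused on promising nodes, which mirrors the abstract's claim that A* avoids redundant exploration of high-cost nodes and thereby reduces the number of LLM calls.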