Keywords: NLP, ReAct, LLM, Reiterate, Early-Stop
TL;DR: Focused ReAct is an enhanced version of the ReAct paradigm that incorporates reiteration and early stop mechanisms.
Abstract: Large language models (LLMs) have significantly improved in reasoning and decision-making capabilities, as demonstrated by methods like ReAct. However, despite its effectiveness on complex tasks, ReAct faces two main challenges: losing focus on the original question and becoming stuck in action loops. To address these issues, we introduce Focused ReAct, an enhanced version of the ReAct paradigm that incorporates reiteration and early stop mechanisms. These improvements help the model stay focused on the original query and avoid repetitive behaviors. Experimental results show accuracy gains of 18% to 530% and a runtime reduction of up to 34% compared to the original ReAct method.
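The two mechanisms named in the abstract can be sketched as a small control loop. This is a hypothetical illustration only, assuming the abstract's high-level description: `act` stands in for an LLM call, and all names (`focused_react`, `max_steps`, the `"finish"` action) are illustrative assumptions, not the authors' implementation.

```python
def focused_react(question, act, max_steps=8):
    """Toy ReAct-style loop with the two mechanisms from the abstract.

    `act` maps a prompt string to a (thought, action) pair and stands
    in for an LLM call. Names and structure are illustrative.
    """
    history = []
    seen_actions = set()
    for _ in range(max_steps):
        # Reiteration: restate the original question in every prompt
        # so the model does not lose focus on it as history grows.
        prompt = f"Question: {question}\n" + "\n".join(history)
        thought, action = act(prompt)
        # Early stop: an exactly repeated action signals a loop,
        # so terminate instead of burning further steps.
        if action in seen_actions:
            return history, "stopped-early"
        seen_actions.add(action)
        history.append(f"Thought: {thought}\nAction: {action}")
        if action == "finish":
            return history, "finished"
    return history, "max-steps"
```

Detecting a verbatim repeated action is one simple way to realize "early stop"; the paper may use a different loop-detection criterion.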
Submission Number: 72