TL;DR: We create a novel dataset of creative problem-solving tasks. We then compare and contrast seven LLMs with humans, showing a substantial gap but complementary capabilities. We also enhance LLMs' problem-solving ability with new prompting techniques.
Abstract: We explore the creative problem-solving capabilities of modern LLMs in a constrained setting. To this end, we create MacGyver, an automatically generated dataset of 1,600 real-world problems deliberately designed to trigger innovative usage of objects and necessitate out-of-the-box thinking. We then present our collection to both LLMs and humans to compare and contrast their problem-solving abilities. Our task is challenging for both groups, but in distinct and complementary ways. For instance, humans excel at tasks they are familiar with but struggle with those requiring domain-specific knowledge, leading to higher variance. In contrast, LLMs, exposed to a variety of specialized knowledge, attempt a broader range of problems but fail by proposing physically infeasible actions. Finally, we provide a detailed error analysis of LLMs, and demonstrate the potential of enhancing their problem-solving ability with novel prompting techniques such as iterative step-wise reflection and divergent-convergent thinking.
This work introduces a fresh arena for intelligent agents focusing on intricate aspects of physical reasoning, planning, and creativity, and provides insight into the constrained problem-solving capabilities of both humans and AI.
Paper Type: long
Research Area: Resources and Evaluation
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English