The Art of Knowing When to Stop: Analysis of Optimal Stopping in People and Machines

Published: 10 Oct 2024, Last Modified: 31 Oct 2024, Venue: MATH-AI 24, License: CC BY 4.0
Keywords: optimal stopping problem, Bayesian inference, Large Language Model, human and machine comparison
TL;DR: Human and Bayesian learners converge to optimal or near-optimal decisions in a combinatorial stopping problem, but GPT agents fail to improve.
Abstract: In combinatorial innovation, people face the decision problem of when to invest in new development and when to stick with the currently best option. Zhao, Vélez, and Griffiths (2024) showed that under a finite horizon this is equivalent to an optimal stopping problem, and provided analytical solutions. Interestingly, in behavioral experiments, while people's decisions aligned with the rational solutions overall, there were also systematic deviations. Here, we examine two heuristic models of this optimal stopping problem in combinatorial innovation. Our approach assumes that agents make decisions by running mental simulations that integrate prior beliefs and past observations. We show that these models capture various patterns in the empirical data well, suggesting that people may rely on simple heuristics to make fast decisions when solving computational problems involving sophisticated combinatorics. We also investigate whether Large Language Models (LLMs) can be used as cognitive models to study these processes, report preliminary findings on LLMs' limitations in this task, but suggest that chain-of-thought prompting may help mitigate these limitations.
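To make the simulation-based heuristic idea in the abstract concrete, here is a minimal, purely illustrative sketch of one way a sampling agent could decide between continuing to innovate and stopping with the current best option. The function name, the belief-update rule, and all parameters (`best_value`, `prior_success_rate`, `horizon`, the 0.5 threshold) are assumptions for illustration only and are not the models described in the paper.

```python
import random

def simulate_stop_decision(best_value, prior_success_rate, observations,
                           n_simulations=1000, horizon=5, seed=0):
    """Illustrative stopping heuristic: mentally simulate the remaining
    horizon under a belief that blends a prior with past observations,
    and continue only if improvement over the current best looks likely."""
    rng = random.Random(seed)
    # Simple belief update: average the prior with the observed success count.
    if observations:
        belief = (prior_success_rate + sum(observations)) / (1 + len(observations))
    else:
        belief = prior_success_rate
    wins = 0
    for _ in range(n_simulations):
        simulated_best = best_value
        for _ in range(horizon):
            # Each simulated attempt succeeds with probability `belief`
            # and, if it succeeds, yields a random value in [0, 1).
            if rng.random() < belief:
                simulated_best = max(simulated_best, rng.random())
        if simulated_best > best_value:
            wins += 1
    # Keep innovating only if the simulated chance of improvement is high enough.
    return "continue" if wins / n_simulations > 0.5 else "stop"

# Example: a decent current best, a modest prior, and mixed past outcomes.
print(simulate_stop_decision(best_value=0.6, prior_success_rate=0.4,
                             observations=[1, 0, 0, 1], horizon=3))
```

The sketch only conveys the general flavor of deciding by sampling imagined futures under a belief informed by priors and observations; the paper's heuristic models and the analytical solutions of Zhao, Vélez, and Griffiths (2024) are specified in the full text.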
Concurrent Submissions: N/A
Submission Number: 58