Does GPT-4 have good intuition about functions?

17 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Large Language Models, Function modelling, Evaluation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We evaluate the function modeling capabilities of GPT-4
Abstract: Humans inherently possess an intuition for modeling real-world functions, such as predicting the trajectory of a ball. Do Large Language Models (LLMs), trained on extensive web data comprising human-generated knowledge, exhibit similar capabilities? This research probes the ability of LLMs (in particular, \textit{GPT-4}) to mimic human-like intuition in comprehending various types of functions. Our evaluation reveals the potent abilities of GPT-4 not only to discern various patterns in data, but also to harness domain knowledge for function modeling at an intuitive level, all without the need for gradient-based learning. In circumstances where data is scarce or domain knowledge takes precedence, GPT-4 manages to exceed the performance of traditional machine learning models. Our findings underscore the remarkable potential of LLMs for data science applications while also highlighting areas for improvement.
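The abstract does not spell out the evaluation protocol; the sketch below only illustrates one plausible setup for probing gradient-free function modeling: show the model a handful of (x, y) pairs in a prompt, ask for a numeric prediction at a new x, and compare against a small regression baseline fit on the same points. `build_prompt`, `llm_predict`, and `query_llm` are hypothetical names introduced here for illustration, not the paper's code or the OpenAI API.

```python
# Hypothetical sketch of a few-shot function-modeling probe.
# `query_llm` stands in for whatever chat-completion client is actually used.
import re
from typing import Callable, Sequence

import numpy as np
from sklearn.linear_model import LinearRegression


def build_prompt(xs: Sequence[float], ys: Sequence[float], x_new: float) -> str:
    # Present the observed pairs and ask for a single-number prediction.
    pairs = "\n".join(f"x = {x:.3f}, y = {y:.3f}" for x, y in zip(xs, ys))
    return (
        "The following (x, y) pairs come from an unknown function:\n"
        f"{pairs}\n"
        f"Predict y for x = {x_new:.3f}. Answer with a single number."
    )


def llm_predict(query_llm: Callable[[str], str],
                xs: Sequence[float], ys: Sequence[float], x_new: float) -> float:
    # Query the model and parse the first number in its reply.
    reply = query_llm(build_prompt(xs, ys, x_new))
    match = re.search(r"-?\d+(\.\d+)?", reply)
    if match is None:
        raise ValueError(f"Could not parse a number from: {reply!r}")
    return float(match.group())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.uniform(-5, 5, size=8)
    ys = 2.0 * xs + 1.0 + rng.normal(scale=0.1, size=8)  # noisy linear function
    x_new = 3.5

    # Baseline: ordinary least squares fit on the same eight points.
    baseline = LinearRegression().fit(xs.reshape(-1, 1), ys)
    print("linear regression:", baseline.predict([[x_new]])[0])

    # Mock LLM reply for a self-contained run; swap in a real GPT-4 client.
    mock = lambda prompt: "y is approximately 8.0"
    print("LLM (mocked):", llm_predict(mock, xs, ys, x_new))
```

The same scaffold extends to the scarce-data and domain-knowledge settings the abstract mentions by shrinking the number of in-context pairs or phrasing x in domain terms (e.g., dates or physical quantities) rather than raw numbers.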
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 979