Keywords: Paradigm for AI Systems, LLM, In-Context Learning, Mocking
TL;DR: A paradigm for adapting LLMs to general machine learning tasks by instructing them to role-play functions.
Abstract: Large language models (LLMs) are now used with increasing frequency as chatbots, tasked with summarizing information or generating text and code in accordance with user instructions.
The rapid growth in the reasoning capabilities and inference speed of LLMs has revealed their remarkable potential for applications extending well beyond chatbots.
However, there is a paucity of research exploring the integration of LLMs into a broader range of intelligent software systems.
In this research, we propose a paradigm that leverages LLMs as mock functions, adapting them to general machine learning tasks.
Furthermore, we present an implementation of this paradigm, the Mockingbird platform.
In this paradigm, users declare mock functions solely through a method signature and accompanying documentation. Unlike LLM-based code completion tools, the platform does not generate code at compile time; instead, it instructs the LLM to role-play these mock functions at runtime (see the illustrative sketch following this abstract).
Based on feedback from users or errors reported by the software system, the platform instructs the LLM to conduct a chain of thought reflecting on its previous outputs, thereby enabling a form of reinforcement learning.
This paradigm fully exploits the intrinsic knowledge and in-context learning ability of LLMs.
In comparison to conventional machine learning methods, this paradigm offers the following distinctive advantages:
(a) Its intrinsic knowledge enables it to perform well in a wide range of zero-shot scenarios.
(b) Its flexibility allows it to adapt to arbitrary additions or removals of data fields.
(c) It can utilize tools and extract information from sources that are inaccessible to conventional machine learning methods, such as the Internet.
Finally, we evaluate its performance and demonstrate the aforementioned benefits on several datasets from Kaggle. Our results indicate that this paradigm is highly competitive.
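To make the role-playing mechanism concrete, below is a minimal sketch in Python. It is not the Mockingbird API: the decorator name `mock_function`, the helper `call_llm`, and the prompt format are assumptions introduced purely for illustration; the abstract only specifies that a function's signature and documentation are sent to the LLM at runtime in place of executing a body.

```python
import inspect
import json
from functools import wraps


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; wire this up to a real model client.
    It raises here so the sketch does not pretend to be a working system."""
    raise NotImplementedError("Connect this stub to an LLM of your choice.")


def mock_function(func):
    """Instead of executing the function body, ask the LLM at call time to
    role-play the function described by its signature and documentation."""
    signature = inspect.signature(func)

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = signature.bind(*args, **kwargs)
        bound.apply_defaults()
        prompt = (
            "You are role-playing the following function. "
            "Return ONLY the JSON-encoded return value.\n"
            f"Signature: {func.__name__}{signature}\n"
            f"Documentation: {inspect.getdoc(func)}\n"
            f"Arguments: {json.dumps(dict(bound.arguments), default=str)}"
        )
        # The LLM's reply is treated as the function's return value.
        return json.loads(call_llm(prompt))

    return wrapper


@mock_function
def classify_sentiment(review: str) -> str:
    """Classify the sentiment of a product review as 'positive', 'neutral',
    or 'negative'. The body is never executed; the LLM role-plays it."""
    ...
```

In this reading of the paradigm, a feedback or error signal would simply be appended to subsequent prompts as additional context, prompting the LLM to reflect on its earlier outputs before answering again; the exact reflection protocol used by Mockingbird is described in the paper itself.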
Supplementary Material: zip
Primary Area: infrastructure, software libraries, hardware, systems, etc.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8645