Benchmarking General-Purpose In-Context Learning

TMLR Paper2826 Authors

07 Jun 2024 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: In-context learning (ICL) empowers generative models to address new tasks effectively and efficiently on the fly, without relying on any artificially crafted optimization techniques. In this paper, we study extending ICL to address a broader range of tasks with an extended learning horizon and higher improvement potential, namely General-Purpose In-Context Learning (GPICL). To this end, we introduce two lightweight benchmarks specifically crafted to train and evaluate GPICL functionalities. Each benchmark encompasses a vast number of tasks characterized by significant task variance. These tasks are also crafted to promote long-horizon in-context learning through continuous generation and interaction, covering domains such as language modeling, decision-making, and world modeling. The benchmarks require models to leverage contexts and interaction histories to enhance their capabilities, which we believe to be the key characteristics of GPICL. We present baseline solutions for the two benchmarks using the transformer model and its variants, demonstrating that these benchmarks meet the criteria for GPICL by highlighting the importance of long-term in-context dependencies and the high potential for in-context improvement. Furthermore, our findings suggest that parameter scale alone may not be the key factor for ICL or GPICL success; instead, greater importance should be given to higher task diversity and longer context lengths.
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=wt5aloIx7n
Changes Since Last Submission: Anonymized the link to the source code.
Assigned Action Editor: ~Ying_Wei1
Submission Number: 2826