Demonstrations in In-context Learning for LLMs with Large Label Space

Published: 18 Jun 2024, Last Modified: 16 Jul 2024 · LCFM 2024 · CC BY 4.0
Keywords: In-context learning; Large Label Space; LLM
Abstract: In-context learning (ICL) enables pre-trained Large Language Models (LLMs) to solve new tasks given a few demonstrations as input. However, little is understood so far about how many demonstrations are required in real-world scenarios, e.g., large-label-space classification. In this work, we conduct a meticulous study under various settings with different LLMs across datasets. Our insights suggest that no demonstrations may be required at all, especially when the class names are descriptive and the model is strong-performing (e.g., GPT-4). Nevertheless, datasets with extremely large label spaces can benefit from additional human-created demonstrations, while automatically generated ones might not yield additional benefits.
Submission Number: 9
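As an illustration of the setup the abstract studies, the sketch below contrasts zero-shot prompting, which relies only on descriptive class names, with few-shot ICL that prepends human-written demonstrations. This is not the paper's code: `query_llm` is a hypothetical stand-in for any text-completion API, and the prompt template is an assumption for illustration only.

```python
from typing import Callable, Sequence, Tuple

def build_prompt(
    text: str,
    label_names: Sequence[str],
    demonstrations: Sequence[Tuple[str, str]] = (),
) -> str:
    """Assemble a classification prompt over a (possibly large) label space."""
    lines = [f"Classify the text into one of: {', '.join(label_names)}.", ""]
    # Each demonstration is an (input, label) pair; an empty sequence
    # yields the zero-shot variant that uses only the class names.
    for demo_text, demo_label in demonstrations:
        lines += [f"Text: {demo_text}", f"Label: {demo_label}", ""]
    lines += [f"Text: {text}", "Label:"]
    return "\n".join(lines)

def classify(
    query_llm: Callable[[str], str],  # hypothetical LLM interface
    text: str,
    label_names: Sequence[str],
    demonstrations: Sequence[Tuple[str, str]] = (),
) -> str:
    """Query the model and return its predicted label string."""
    prompt = build_prompt(text, label_names, demonstrations)
    return query_llm(prompt).strip()
```

Under this sketch, the abstract's finding corresponds to calling `classify(..., demonstrations=())`: with descriptive class names and a strong model, the zero-shot variant may already match the few-shot one, while only extremely large label spaces call for human-created demonstrations.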