Self-Teaching Prompting for Multi-Intent Learning with Limited Supervision

Published: 19 Mar 2024, Last Modified: 01 Jun 2024
Venue: Tiny Papers @ ICLR 2024
License: CC BY 4.0
Keywords: Large language model, Prompting, Multi-intent learning with limited supervision
Abstract: Multi-intent learning with limited supervision involves predicting the multiple intentions expressed in an utterance using only a few annotated samples. The primary motivation for this task stems from the high cost and cumbersome process of annotating large datasets. To mitigate this, we propose utilising Large Language Models (LLMs) for annotation assistance. Although LLMs show promise, they struggle with response randomness, and prior prompts are static and do not learn from the model's outputs. To address this, we propose 'self-teaching prompting' (STP), a method that enables LLMs to iteratively learn from their consistent samples and refine their predictions over time. Our experiments on multi-intent datasets demonstrate that STP significantly enhances response accuracy.
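The abstract describes STP only at a high level, so the following is a hedged, minimal sketch of one plausible reading of the idea: sample the LLM several times per utterance, accept a prediction only when the samples are sufficiently consistent, and feed accepted (utterance, intents) pairs back into the prompt as demonstrations for later rounds. The `mock_llm` stub, the `n_samples`, `rounds`, and `threshold` parameters, and the semicolon-joined intent labels are all illustrative assumptions, not details from the paper.

```python
from collections import Counter

def self_teaching_prompting(llm, utterances, n_samples=5, rounds=3, threshold=0.8):
    """Sketch of self-teaching prompting (STP) under the assumptions above.

    llm: callable taking (utterance, demonstrations) and returning a
         multi-intent label string, e.g. "BookFlight;BookHotel".
    Accepted predictions become in-context demonstrations for later rounds.
    """
    demonstrations = []  # (utterance, intents) pairs the model was consistent on
    predictions = {}
    for _ in range(rounds):
        for utt in utterances:
            if utt in predictions:
                continue  # already pseudo-labelled in an earlier round
            # Sample the model several times to measure response consistency.
            answers = [llm(utt, demonstrations) for _ in range(n_samples)]
            top, count = Counter(answers).most_common(1)[0]
            if count / n_samples >= threshold:
                # Consistent sample: keep it and reuse it as a demonstration.
                predictions[utt] = top
                demonstrations.append((utt, top))
    return predictions

# Hypothetical deterministic LLM stub, for illustration only.
def mock_llm(utterance, demonstrations):
    labels = {
        "book a flight and reserve a hotel": "BookFlight;BookHotel",
        "play some jazz and dim the lights": "PlayMusic;ControlLights",
    }
    return labels.get(utterance, "Unknown")

predictions = self_teaching_prompting(
    mock_llm,
    ["book a flight and reserve a hotel", "play some jazz and dim the lights"],
)
```

With a real LLM, the sampling step would use a nonzero temperature so that consistency across samples is informative rather than trivial.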
Submission Number: 193