Distilling Examples into Task Instructions: Enhanced In-Context Learning for Real-World B2B Conversations

ACL ARR 2026 January Submission 6193 Authors

05 Jan 2026 (modified: 20 Mar 2026), ACL ARR 2026 January Submission, CC BY 4.0
Keywords: LLM Efficiency, few-shot learning, human-in-the-loop, business NLP, NLP datasets
Abstract: In-context learning (ICL) is the standard method for low-data classification, yet its efficacy in specialized, intricate domains remains largely unexplored. We address the challenge of classifying semantically complex, multi-party B2B conversations, where traditional ICL encounters significant limitations, especially as context length grows with the concatenation of multiple few-shot examples. We introduce the Call Playbook dataset, featuring five classification tasks derived from real-world B2B conversations and targeting core sales concepts. To bridge the gap between performance and practical utility, we propose novel knowledge extraction methods that distill verbose examples into compact, interpretable artifacts: structured classification criteria and precise task descriptions. Our approach achieves a 99% reduction in token usage and improves macro-averaged AUC by up to 7% over traditional ICL. Notably, our method remains robust as context grows, unlike advanced token compression baselines, which degrade by over 9 points. The interpretable artifacts also support seamless refinement, allowing users to directly modify the classification logic. This approach addresses critical needs for transparency, efficiency, and user interaction in real-world NLP applications.
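To make the distillation idea concrete, below is a minimal Python sketch of the two-stage pipeline the abstract describes: a one-time call compresses labeled few-shot examples into a short, human-editable instruction, which then replaces the raw examples at inference time. All names here (`llm`, `distill_examples`, `classify`) are hypothetical illustrations, not the authors' actual implementation.

```python
from typing import Callable, Sequence, Tuple

# `llm` stands in for any text-completion function (prompt -> completion).

def distill_examples(
    llm: Callable[[str], str],
    task_name: str,
    examples: Sequence[Tuple[str, str]],  # (conversation text, label) pairs
) -> str:
    """One-time step: compress verbose few-shot examples into a compact,
    interpretable task instruction with structured classification criteria."""
    shots = "\n\n".join(
        f"Conversation:\n{text}\nLabel: {label}" for text, label in examples
    )
    prompt = (
        f"You are building a classifier for the task '{task_name}'.\n"
        "From the labeled conversations below, write a concise task "
        "description and a bulleted list of classification criteria.\n\n"
        f"{shots}"
    )
    return llm(prompt)  # a plain-text artifact that users can edit directly

def classify(llm: Callable[[str], str], instruction: str, conversation: str) -> str:
    """Inference step: only the short distilled instruction is sent, not the
    concatenated examples, which is the source of the token savings."""
    prompt = (
        f"{instruction}\n\nConversation:\n{conversation}\n"
        "Answer with a single label."
    )
    return llm(prompt).strip()
```

The design point is that distillation is paid once per task, while every subsequent inference call carries only the compact instruction; and because the artifact is plain text, a user can revise the criteria directly, matching the human-in-the-loop refinement described above.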
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: LLM Efficiency, few-shot learning, human-in-the-loop, business NLP, NLP datasets
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data resources
Languages Studied: English
Submission Number: 6193