Context Matters: Enriching NLP Models with GPT-Generated Insights

ACL ARR 2025 February Submission 3537 Authors

15 Feb 2025 (modified: 09 May 2025) · License: CC BY 4.0
Abstract: Large Language Models (LLMs) excel at NLP tasks but are highly sensitive to input design. This study examines the impact of context augmentation when fine-tuning NLP models for adverse drug event (ADE) detection from social media text. We evaluate sequence and token classification tasks under different input regimes, including appended context and span highlighting. Our results show that appended context consistently improves performance, increasing F1 scores by 2--4 points. However, added context shifts the precision-recall balance, boosting recall at the cost of precision. These findings highlight the potential of LLM-generated and knowledge-based context for enhancing NLP task quality in data-scarce settings.
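To make the two input regimes named in the abstract concrete, below is a minimal Python sketch of how such inputs could be constructed. The separator token, marker tags, and example post are assumptions for illustration, not the authors' published preprocessing.

```python
# Hypothetical sketch of the two input regimes from the abstract:
# (1) appended context for sequence classification, and
# (2) span highlighting for token-level ADE detection.
# Separators, tags, and examples are assumed, not taken from the paper.

def append_context(post: str, context: str, sep: str = " [SEP] ") -> str:
    """Appended-context regime: concatenate the social media post with
    LLM-generated or knowledge-based background text."""
    return post + sep + context

def highlight_span(post: str, span: str,
                   open_tag: str = "<ade>", close_tag: str = "</ade>") -> str:
    """Span-highlighting regime: wrap a candidate ADE mention in marker
    tokens so the fine-tuned model can attend to it."""
    start = post.index(span)
    end = start + len(span)
    return post[:start] + open_tag + span + close_tag + post[end:]

# Example usage with a made-up post and a GPT-style generated gloss.
post = "took the new med and now my hands won't stop shaking"
context = "Tremor is a documented adverse reaction to several medications."
print(append_context(post, context))
print(highlight_span(post, "won't stop shaking"))
```

In this sketch, the appended context travels with the post as a single classifier input, while span highlighting leaves the text length nearly unchanged, which may matter for token classification under tight sequence-length budgets.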
Paper Type: Short
Research Area: Information Extraction
Research Area Keywords: Efficient/Low-Resource Methods for NLP, Generation, Information Extraction, Language Modeling, Machine Learning for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 3537