ADO: Automatic Data Optimization for Inputs in LLM Prompts

ACL ARR 2025 February Submission5195 Authors

16 Feb 2025 (modified: 09 May 2025) · License: CC BY 4.0
Abstract:

This study explores a novel approach to enhancing the performance of Large Language Models (LLMs) through the optimization of input data within prompts. While previous research has primarily focused on refining instruction components and augmenting input data with in-context examples, our work investigates the potential benefits of optimizing the input data itself. We introduce a two-pronged strategy for input data optimization: content engineering and structural reformulation. Content engineering involves imputing missing values, removing irrelevant attributes, and enriching profiles by generating additional information inferred from existing attributes. Following content engineering, structural reformulation is applied to optimize the presentation of the modified content to LLMs, given their sensitivity to input format. Our findings suggest that these optimizations can significantly improve the performance of LLMs across various tasks, offering a promising avenue for future research in prompt engineering. The source code is available at https://anonymous.4open.science/r/ADO-6BC5/.
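To make the two-pronged strategy concrete, the following is a minimal Python sketch of what content engineering and structural reformulation might look like for a single profile. The function names, the `relevant_keys` parameter, and the enrichment rule are all hypothetical illustrations under assumed conventions, not the authors' implementation; see the linked repository for the actual method.

```python
# Hypothetical sketch of the two-pronged input-data optimization described
# in the abstract. All names and heuristics here are illustrative only.

def content_engineering(profile: dict, relevant_keys: set) -> dict:
    """Impute missing values, drop irrelevant attributes, and enrich."""
    # Remove attributes irrelevant to the downstream task.
    cleaned = {k: v for k, v in profile.items() if k in relevant_keys}
    # Impute missing values with an explicit placeholder (a real system
    # might instead infer them, e.g., via an auxiliary LLM call).
    for key in relevant_keys:
        cleaned.setdefault(key, "unknown")
    # Enrich: derive a new attribute from existing ones (hypothetical rule).
    if cleaned.get("birth_year", "unknown") != "unknown":
        cleaned["age"] = 2025 - int(cleaned["birth_year"])
    return cleaned

def structural_reformulation(profile: dict) -> str:
    """Serialize the engineered content in an LLM-friendly layout.

    Since LLMs are sensitive to input format, the same content can be
    rendered in several candidate layouts; a sorted key-value list is
    used here purely for illustration.
    """
    return "\n".join(f"- {k}: {v}" for k, v in sorted(profile.items()))

raw = {"name": "Alice", "birth_year": "1990", "favorite_color": "blue"}
engineered = content_engineering(raw, relevant_keys={"name", "birth_year", "occupation"})
print(structural_reformulation(engineered))
```

In this sketch, the irrelevant `favorite_color` attribute is dropped, the missing `occupation` is imputed with a placeholder, and `age` is inferred from `birth_year` before the result is reformatted for the prompt.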

Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: large language model, data optimization
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 5195