Learning to Rewrite Prompts for Bootstrapping LLMs on Downstream Tasks

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: As versatile agents, large language models (LLMs) have demonstrated impressive performance across various domains, and their capabilities in language-based pattern recognition have been applied to numerous tasks with notable success. However, different LLMs still depend on task-specific instruction prompts, and crafting these prompts remains largely a manual effort, which hinders the widespread application of LLMs. To address this challenge, we propose a concise and effective input-optimization method consisting of two modules: original-input rewriting and filtering. Inspired by the idea of collaboration between large and small models, we insert a rewriting module between the input prompts and LLM inference; this module rewrites the input according to the target LLM's preferences over the data. The filtering module then inspects the quality of the rewritten data and discards invalid or hallucinated rewrites. Experimental results on language pattern recognition tasks verify that our rewriting and filtering method effectively transforms ambiguous inputs into more precise prompts, yielding consistent performance improvements over the original inputs across various tasks.
Paper Type: long
Research Area: Summarization
Languages Studied: English
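
The abstract describes a rewrite-then-filter pipeline: a rewriting module transforms the raw input before LLM inference, and a filtering module rejects low-quality rewrites before the target LLM is queried. The sketch below illustrates that control flow only. It is not the paper's implementation: the `chat` helper, the prompt template, the model identifiers, and the heuristic filter are all illustrative assumptions.

```python
# Minimal sketch of the rewrite-then-filter pipeline from the abstract.
# All names (chat, rewrite_input, passes_filter, REWRITE_TEMPLATE) and
# model identifiers are illustrative assumptions, not the paper's code.

REWRITE_TEMPLATE = (
    "Rewrite the following task input so it is clearer and easier for a "
    "language model to answer. Keep the original intent.\n\n"
    "Input: {x}\nRewritten input:"
)

def chat(model: str, prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError("plug in your LLM client here")

def rewrite_input(x: str, rewriter_model: str = "small-rewriter") -> str:
    # Rewriting module: a smaller model reformulates the raw input based
    # on the target LLM's preferences over the data.
    return chat(rewriter_model, REWRITE_TEMPLATE.format(x=x)).strip()

def passes_filter(original: str, rewritten: str) -> bool:
    # Filtering module: quality-inspect the rewrite and reject invalid
    # outputs. These length heuristics are stand-ins; the paper's check
    # for hallucinated content would be more involved.
    if not rewritten or rewritten == original:
        return False
    return 0.5 * len(original) <= len(rewritten) <= 3 * len(original)

def answer(x: str, target_model: str = "large-llm") -> str:
    rewritten = rewrite_input(x)
    # Fall back to the original input when the rewrite fails the filter,
    # so the pipeline never does worse than passing the input through.
    prompt = rewritten if passes_filter(x, rewritten) else x
    return chat(target_model, prompt)
```

One design point worth noting: because the filter falls back to the original input, the pipeline degrades gracefully, which matches the abstract's claim that improvements are observed relative to the original inputs rather than requiring every rewrite to succeed.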