Generating Data Augmentation Queries Using Large Language Models
Keywords: Information Integration, Data Integration, Large Language Models, Heterogeneous DBMS, Federated DBMS, Applied ML and AI for Data Management
TL;DR: Augmenting local entities with external information retrieved online by progressively learning query formulation strategies with a pretrained LLM.
Abstract: Users often want to augment entities in their datasets with relevant information from external data sources. As many external sources are accessible only via keyword-search interfaces, a user usually has to manually formulate a keyword query that extracts relevant information for each entity. This is challenging, as many data sources contain numerous tuples, only a small fraction of which may be relevant. Moreover, different datasets may represent the same information in distinct forms and under different terms. In such cases, it is difficult to formulate a query that precisely retrieves information relevant to a specific entity. Current methods for information enrichment rely mainly on resource-intensive manual effort to formulate such queries. However, it is often important for users to get initial answers quickly and without substantial investment of resources (such as human attention). We propose a progressive approach to discovering entity-relevant information from external sources with minimal expert intervention. It leverages end users’ feedback to progressively learn how to retrieve information relevant to each entity in a dataset from external data sources. To bootstrap performance, we use a pre-trained large language model (LLM) to produce rich representations of entities. We evaluate the use of parameter-efficient techniques for aligning the LLM’s representations with our downstream task of online query policy learning and find that even lightweight fine-tuning methods can effectively adapt encodings to domain-specific data.
Submission Number: 40
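The parameter-efficient adaptation mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy-only toy (not the paper's actual method or model): a frozen "pre-trained" projection W stands in for the LLM encoder, and only a small low-rank correction A @ B (in the style of LoRA-type adapters) is trained against a toy downstream signal, so just r*(d_in + d_out) parameters are updated.

```python
# Hypothetical sketch of parameter-efficient adaptation: the frozen base
# projection W is augmented with a trainable low-rank term A @ B.
# All dimensions and the regression target are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 8, 2                   # encoding dims and low rank

W = rng.normal(size=(d_in, d_out))          # frozen pre-trained weights
A = rng.normal(size=(d_in, r))              # trainable low-rank factor
B = np.zeros((r, d_out))                    # zero init: adapter starts as a no-op

def encode(x):
    """Adapted entity encoding: frozen base plus low-rank correction."""
    return x @ W + x @ A @ B

# Toy regression target standing in for the downstream policy-learning signal.
X = rng.normal(size=(32, d_in))
Y = rng.normal(size=(32, d_out))

lr = 1e-2
for _ in range(200):
    err = encode(X) - Y                     # residual on the toy task
    # Gradients w.r.t. A and B only; W stays frozen throughout.
    gA = X.T @ err @ B.T / len(X)
    gB = A.T @ X.T @ err / len(X)
    A -= lr * gA
    B -= lr * gB

loss_before = np.mean((X @ W - Y) ** 2)     # frozen encoder alone
loss_after = np.mean((encode(X) - Y) ** 2)  # with the trained adapter
```

The design point mirrors the abstract's finding: the base representation is reused untouched, and a lightweight set of extra parameters suffices to move the encodings toward the downstream objective.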