Framework Adapts PLMs towards Target Domain via Correcting Knowledge Bias
Keywords: Pretrained Language Models, Large Language Models, Adapter, Domain shift, Topic Lift
TL;DR: Our goal is to guide pre-trained language models (PLMs) towards the target domain by correcting knowledge bias.
Abstract: Our goal is to guide pre-trained language models (PLMs) towards the target domain.
Since Transformer-based models are pre-trained on corpora that are larger and more heterogeneous than any specific target corpus,
the domain gap between these pre-training corpora and the target corpus raises the question of whether such PLMs will actually benefit the target task after fine-tuning.
To close this domain gap,
we propose the Target Dig Adapter (TDA), a model-agnostic adaptation framework that coordinates the knowledge of PLMs, the source domain, and the target domain.
The novelty of TDA is that it focuses on the differences between global and local knowledge
and guides PLMs towards the target domain by shifting these differences.
Experiments show that TDA closes this gap
and guides PLMs to generate text aligned with a given small target corpus.
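As a rough, hedged illustration of the general adapter idea described in the abstract (this is not the paper's actual TDA method; the module name `LogitCorrectionAdapter`, the logit-correction term, and all dimensions are assumptions made only for this sketch), a model-agnostic adapter can leave the PLM frozen and learn a small correction that shifts its output distribution toward a small target corpus:

```python
# Hypothetical sketch: a small trainable module added on top of a frozen PLM.
# It learns a low-rank correction to the PLM's next-token logits, nudging
# generation toward a target-domain corpus. All names here are illustrative.
import torch
import torch.nn as nn


class LogitCorrectionAdapter(nn.Module):
    """Learns a low-rank correction added to the frozen PLM's logits."""

    def __init__(self, hidden_size: int, vocab_size: int, rank: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_size, rank, bias=False)
        self.up = nn.Linear(rank, vocab_size, bias=False)
        nn.init.zeros_(self.up.weight)  # start with zero correction (PLM unchanged)

    def forward(self, hidden_states: torch.Tensor, base_logits: torch.Tensor) -> torch.Tensor:
        # base_logits: logits from the frozen PLM; the learned term shifts them
        # toward the target-domain distribution during fine-tuning on the target corpus.
        return base_logits + self.up(self.down(hidden_states))


if __name__ == "__main__":
    batch, seq_len, hidden, vocab = 2, 8, 64, 100
    adapter = LogitCorrectionAdapter(hidden, vocab)
    hidden_states = torch.randn(batch, seq_len, hidden)  # stand-in for PLM hidden states
    base_logits = torch.randn(batch, seq_len, vocab)     # stand-in for frozen PLM logits
    corrected = adapter(hidden_states, base_logits)
    print(corrected.shape)  # torch.Size([2, 8, 100])
```

Only the adapter parameters would be updated on the small target corpus, which is what makes such a scheme model-agnostic; how TDA specifically measures and shifts the difference between global and local knowledge is described in the paper itself.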
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8586