Abstract: In recent years, leveraging pre-trained language models (PLMs) to generate embeddings for downstream tasks has achieved remarkable success. To further enhance the adaptability of PLMs to downstream tasks, the prevailing strategy incorporates auxiliary tasks (e.g., contrastive learning) as regularization terms when fine-tuning the pre-trained models. However, this approach encounters challenges due to task conflicts between the auxiliary tasks and the specific downstream task. To overcome these challenges, we introduce a novel strategy termed Consistency Adversarial Training (CAT). CAT first dynamically identifies the cases that are most inconsistent between the specific and auxiliary tasks by introducing perturbations, and then eliminates the inconsistency in an adversarial learning manner. Performance evaluations on the GLUE benchmark demonstrate that CAT minimizes task conflicts during the fine-tuning of PLMs, leading to a notable improvement in overall fine-tuning performance.
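The abstract outlines a min-max procedure: an inner step perturbs the model to expose the largest disagreement between the downstream and auxiliary losses, and an outer step trains the model to remove that disagreement. Below is a minimal sketch of one such training step, assuming a PyTorch setup; the encoder/head split, the InfoNCE-style auxiliary loss, the |L_task - L_aux| inconsistency measure, and the FGSM-style embedding perturbation are all illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def contrastive_loss(z1, z2, temperature=0.1):
    # InfoNCE over in-batch negatives (assumed form of the auxiliary task).
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def cat_step(encoder, head, x, y, epsilon=1e-2, lam=0.5):
    """One hypothetical CAT training step: find the perturbation that
    maximizes the task/auxiliary inconsistency, then minimize the
    downstream loss plus the inconsistency at that worst-case point."""
    emb = encoder(x)

    def losses(e):
        task = F.cross_entropy(head(e), y)       # specific downstream loss
        aux = contrastive_loss(e, emb.detach())  # auxiliary loss
        return task, aux

    # Inner maximization: an FGSM-style perturbation of the embeddings
    # that exposes the largest disagreement between the two losses.
    delta = torch.zeros_like(emb, requires_grad=True)
    task, aux = losses(emb.detach() + delta)
    grad, = torch.autograd.grad((task - aux).abs(), delta)
    delta = epsilon * grad.sign()

    # Outer minimization: fit the downstream task while shrinking the
    # inconsistency under the worst-case perturbation found above.
    task, aux = losses(emb + delta.detach())
    return task + lam * (task - aux).abs()
```

In use, `cat_step` would replace the plain loss computation in a standard fine-tuning loop (`loss = cat_step(...); loss.backward(); optimizer.step()`); everything else about the training setup stays unchanged.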