Robustness to Noisy Labels in Parameter Efficient Fine-tuning

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: As language models grow in size, Parameter-Efficient Fine-tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) offer compute efficiency while maintaining performance. However, their robustness to label noise, a significant issue in real-world data, remains unexplored. This study investigates whether LoRA-tuned models exhibit the same noise resistance observed in fully fine-tuned Transformer models. Our investigation yields several key findings. First, we show that on balanced data LoRA is as robust to random label noise as full fine-tuning, but unlike full fine-tuning, it does not overfit the noisy data. Second, we observe that LoRA forgets significantly fewer data points than full fine-tuning as noise increases. Third, studying how these robustness patterns change as the training data becomes imbalanced, we find that Transformers struggle with imbalanced data, with robustness declining as the imbalance worsens. Overall, this study highlights LoRA's promise in real-world settings with label noise and data imbalance, revealing it as a robust and efficient alternative to full fine-tuning.
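
The paper's exact experimental code is not shown on this page; the following is a minimal sketch of the kind of setup the abstract describes: injecting random (symmetric) label noise into a training set and adding a LoRA-style low-rank adapter on top of a frozen layer. The function names, noise model, rank `r`, and scaling `alpha` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): symmetric label-noise injection and a
# LoRA-style low-rank update. Hyperparameters here are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn


def inject_symmetric_noise(labels: np.ndarray, noise_rate: float,
                           num_classes: int, seed: int = 0) -> np.ndarray:
    """Flip a `noise_rate` fraction of labels to a different class chosen uniformly at random."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < noise_rate
    for i in np.where(flip)[0]:
        choices = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(choices)
    return noisy


class LoRALinear(nn.Module):
    """Frozen dense layer W plus a trainable low-rank update (alpha / r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter matrices A and B are trained
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

In a comparison like the one the abstract outlines, the same noisy labels would be used to train both a fully fine-tuned model and a model whose dense layers are wrapped with adapters of this form, and robustness would be measured as clean-test accuracy across increasing noise rates.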
Paper Type: short
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency, Data analysis
Languages Studied: English