Keywords: Large Language Models, Fine-Tuning, Hallucination Mitigation, Adaptive Noise Injection, Hybrid Loss
Abstract: Large language models (LLMs) often produce inaccurate or misleading content, known as hallucinations. To address this challenge, we introduce Noise-Augmented Fine-Tuning (NoiseFiT), a novel framework that leverages adaptive noise injection based on the signal-to-noise ratio (SNR) to enhance model robustness. Our contribution is threefold. First, NoiseFiT selectively perturbs layers identified as either high-SNR (more robust) or low-SNR (potentially under-regularized) using dynamically scaled Gaussian noise. Second, we propose a hybrid loss that combines standard cross-entropy, soft cross-entropy, and consistency regularization to ensure stable and accurate outputs under noisy training conditions. Third, our theoretical analysis shows that adaptive noise injection is unbiased and variance-preserving, providing strong guarantees for convergence in expectation. Moreover, empirical results on multiple test and benchmark datasets demonstrate that NoiseFiT significantly reduces hallucination rates, often matching or improving baseline performance on key tasks. These findings highlight the promise of noise-driven strategies for achieving robust, trustworthy language modeling without incurring prohibitive computational overhead. We have publicly released the fine-tuning logs, benchmark evaluation artifacts, and source code online at W&B, Hugging Face, and GitHub, respectively, to foster further research, accessibility, and reproducibility.
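For intuition only, below is a minimal sketch of how SNR-scaled noise injection and a three-term hybrid loss could look in PyTorch. The specific SNR estimate, the inverse-SNR scaling rule, and the `alpha`/`beta` weights are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def snr_scaled_noise(hidden, base_std=0.1, eps=1e-8):
    """Add Gaussian noise whose scale adapts to a simple per-layer SNR estimate.

    Here SNR is approximated as mean(|h|) / std(h); the noise standard deviation
    is scaled inversely with this estimate (an assumption made for illustration).
    """
    snr = hidden.abs().mean() / (hidden.std() + eps)
    noise = torch.randn_like(hidden) * base_std / (snr + eps)
    return hidden + noise

def hybrid_loss(logits_clean, logits_noisy, labels, alpha=0.5, beta=0.1):
    """Hybrid objective: hard cross-entropy on the noisy pass, soft cross-entropy
    against the clean distribution, and a KL consistency term between the two passes.
    The weights alpha and beta are placeholder hyperparameters."""
    ce = F.cross_entropy(logits_noisy.view(-1, logits_noisy.size(-1)), labels.view(-1))
    soft_targets = F.softmax(logits_clean.detach(), dim=-1)
    soft_ce = -(soft_targets * F.log_softmax(logits_noisy, dim=-1)).sum(-1).mean()
    consistency = F.kl_div(F.log_softmax(logits_noisy, dim=-1),
                           soft_targets,
                           reduction="batchmean")
    return ce + alpha * soft_ce + beta * consistency
```

In such a setup, a fine-tuning step would run the model once with noise injected into the selected layers and once without, then combine the two sets of logits through the hybrid loss above.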
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16652