Abstract: Samples with incorrect labels are common in datasets, even those annotated by humans. Several approaches have been proposed to alleviate the negative impact of mislabeled samples on training, for example by removing erroneous data or down-weighting them. Unlike previous work, this paper introduces a lightweight yet effective denoising method based on the relationships among samples within a dataset, which we call internal guidance. We evaluate the method on five datasets with mainstream models. The results demonstrate that this lightweight denoising approach yields consistent improvements across all datasets and models.