Abstract: Deep learning algorithms have achieved remarkable success across many domains, yet training them in environments with limited data remains a significant hurdle owing to their reliance on millions of parameters. This paper addresses the problem of training under data scarcity by introducing two novel techniques for resource-constrained deep learning: Guided DropBlock and Filter Augmentation. Guided DropBlock is inspired by the DropBlock regularization method; unlike its predecessor, which randomly omits a contiguous segment of the image, the proposed approach is more selective, focusing the omission on the background and on specific blocks that carry critical semantic information about the objects in question. The filter augmentation technique, in turn, applies a series of operations to the Convolutional Neural Network (CNN) filters during training. Our findings indicate that integrating filter augmentation while fine-tuning a CNN can substantially enhance performance in data-limited settings, yielding a smoother decision boundary and behavior resembling that of an ensemble. Imposing these additional constraints on loss optimization helps mitigate the challenges posed by data scarcity and ensures robust feature extraction from the input signal, even when some learnable parameters within the CNN layers are frozen. We validate these enhancements on seven publicly available benchmark datasets and on two real-world use cases, namely newborn identification and post-cataract-surgery monitoring, providing empirical support for our claims.
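For a concrete picture of the first technique, a minimal PyTorch sketch of Guided DropBlock follows. It is an illustration under assumptions, not the paper's implementation: the function name, the per-pixel guidance map in [0, 1] (e.g. derived from a saliency or segmentation mask), and the reuse of the standard DropBlock rate derivation are all our choices.

    import torch
    import torch.nn.functional as F

    def guided_dropblock(x, guidance, block_size=5, drop_prob=0.1, training=True):
        """Zero out contiguous block_size x block_size regions of a feature
        map, biasing block centres toward regions flagged by `guidance`
        (e.g. background or semantically critical object blocks) instead of
        sampling them uniformly at random as DropBlock does.

        x:        (N, C, H, W) feature map
        guidance: (N, 1, H, W) map in [0, 1]; higher values are dropped more often
        """
        if not training or drop_prob == 0.0:
            return x
        _, _, h, w = x.shape
        # Per-position Bernoulli rate giving an expected drop_prob fraction
        # of dropped units (the standard DropBlock derivation).
        gamma = (drop_prob / block_size ** 2) * (h * w) / (
            max(h - block_size + 1, 1) * max(w - block_size + 1, 1))
        # The "guided" part: modulate the sampling rate with the guidance map.
        centres = torch.bernoulli((gamma * guidance).clamp(0.0, 1.0))
        # Grow each sampled centre into a full block via max pooling.
        block = F.max_pool2d(centres, kernel_size=block_size, stride=1,
                             padding=block_size // 2)
        keep = 1.0 - block
        # Rescale so the expected activation magnitude is preserved.
        return x * keep * keep.numel() / keep.sum().clamp(min=1.0)

    # Usage: guidance here is random purely for illustration.
    feat = torch.randn(2, 64, 32, 32)
    guide = torch.rand(2, 1, 32, 32)
    out = guided_dropblock(feat, guide, block_size=5, drop_prob=0.1)

A similarly hedged sketch of filter augmentation: the abstract states only that a series of operations is applied to the CNN filters during training, so the wrapper below uses spatial flips of the kernels purely as placeholder operations.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FilterAugmentedConv(nn.Module):
        """Wrap a (possibly frozen) Conv2d and apply a randomly chosen
        operation to its filter bank on each training forward pass, so the
        layer behaves like an ensemble of perturbed filter banks."""

        def __init__(self, conv: nn.Conv2d):
            super().__init__()
            self.conv = conv

        def forward(self, x):
            w = self.conv.weight
            if self.training:
                # Placeholder operations: identity, horizontal flip, or
                # vertical flip of the spatial kernel dimensions; the
                # paper's actual filter operations may differ.
                op = int(torch.randint(0, 3, (1,)))
                if op == 1:
                    w = torch.flip(w, dims=[-1])
                elif op == 2:
                    w = torch.flip(w, dims=[-2])
            return F.conv2d(x, w, self.conv.bias,
                            stride=self.conv.stride, padding=self.conv.padding,
                            dilation=self.conv.dilation, groups=self.conv.groups)

Because only the filter bank is perturbed, the wrapper can be applied to frozen layers during fine-tuning, which is consistent with the abstract's claim that robust features are extracted even when some learnable parameters are frozen.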
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=1cd6w9MGdz
Changes Since Last Submission: The times package has been removed from the .tex file to make the formatting consistent with TMLR's font requirements.
Assigned Action Editor: ~Krzysztof_Jerzy_Geras1
Submission Number: 1786