Pushing the Limits of Gradient Descent for Efficient Learning on Large Images

TMLR Paper 2142 Authors

06 Feb 2024 (modified: 21 Jun 2024) · Under review for TMLR
Abstract: Traditional CNN models are trained and tested on relatively low-resolution images ($<300$ px) and cannot be directly applied to large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective learning strategy that allows existing CNN architectures to be trained on large-scale images in an end-to-end manner. PatchGD is based on the hypothesis that, instead of performing gradient-based updates on an entire image at once, a good solution can be reached by performing model updates on only small parts of the image at a time, ensuring that the majority of the image is covered over the course of iterations. PatchGD thus enjoys substantially better memory and compute efficiency when training models on large-scale images. PatchGD is thoroughly evaluated on two datasets, PANDA and UltraMNIST, with ResNet50 and MobileNetV2 models under different memory constraints. Our evaluation clearly shows that PatchGD is much more stable and efficient than the standard gradient descent method in handling large images, especially when compute memory is limited.
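To make the idea in the abstract concrete, the following is a minimal sketch of patch-wise gradient updates in PyTorch. The uniform patch sampling, cross-entropy loss, and the `patchwise_step` helper are illustrative assumptions for this sketch only; the abstract does not spell out the full PatchGD procedure, so this should not be read as the authors' exact method.

```python
# Illustrative sketch: update a model using a few patches of one large image,
# rather than a full-image forward/backward pass. Assumes a PyTorch classifier
# `model` that accepts fixed-size crops; not the authors' exact PatchGD.
import torch
import torch.nn.functional as F

def patchwise_step(model, optimizer, image, label, patch_size=256, n_patches=4):
    """One model update from randomly sampled patches of a single large image.

    image: tensor of shape (C, H, W); label: integer class index.
    """
    _, H, W = image.shape
    optimizer.zero_grad()
    for _ in range(n_patches):
        # Sample a random patch location (hypothetical uniform sampling).
        top = torch.randint(0, H - patch_size + 1, (1,)).item()
        left = torch.randint(0, W - patch_size + 1, (1,)).item()
        patch = image[:, top:top + patch_size, left:left + patch_size]
        logits = model(patch.unsqueeze(0))            # forward on one patch only
        loss = F.cross_entropy(logits, torch.tensor([label]))
        (loss / n_patches).backward()                 # accumulate gradients over patches
    optimizer.step()                                  # single parameter update
```

Because only one patch is resident in memory during each forward/backward pass, peak activation memory scales with the patch size rather than the full image size, which is the efficiency argument made in the abstract.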
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Ozan_Sener1
Submission Number: 2142