Abstract: Training on edge devices poses several challenges, as these devices are generally resource-constrained, especially in terms of power.
State-of-the-art techniques at the device level reduce the GPU frequency to enforce power constraints, which significantly increases training time. To accelerate training, we propose jointly adjusting the system and application parameters (in our case, the GPU frequency and the batch size of the training task) while adhering to the power constraints on devices. We introduce a novel cross-layer methodology that combines predictions of batch-size efficiency with device profiling to achieve the desired optimization. Our evaluation on real hardware shows that our method outperforms current baselines built on state-of-the-art techniques, reducing training time by $2.4\times$ with results very close to optimal. Our measurements also indicate a substantial reduction in the overall energy used for training. These gains are achieved without any reduction in the performance of the trained model.
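To make the joint tuning idea concrete, below is a minimal sketch of selecting a (GPU frequency, batch size) configuration under a power cap. The `PROFILE` table, the `efficiency` values, and all numbers are hypothetical placeholders standing in for the paper's device-profiling LUTs and batch-size efficiency predictor, not the actual method.

```python
# Minimal sketch (hypothetical): pick a (GPU frequency, batch size) pair
# that minimizes predicted training time under a device power cap.
# PROFILE mimics a device-profiling LUT; "efficiency" stands in for the
# predicted statistical efficiency of each batch size (1.0 = baseline).
PROFILE = {
    # (gpu_freq_MHz, batch_size): measured power (W), throughput (samples/s)
    (1300, 32):  {"power_w": 14.0, "throughput": 410.0, "efficiency": 1.00},
    (1300, 64):  {"power_w": 14.8, "throughput": 520.0, "efficiency": 0.97},
    (1000, 64):  {"power_w": 11.2, "throughput": 430.0, "efficiency": 0.97},
    (1000, 128): {"power_w": 11.9, "throughput": 480.0, "efficiency": 0.90},
    (700,  128): {"power_w":  8.5, "throughput": 310.0, "efficiency": 0.90},
}

def pick_config(power_cap_w: float, total_samples: int):
    """Return the configuration with the lowest predicted training time
    among those whose measured power stays under the cap."""
    def predicted_time(stats):
        # Lower batch-size efficiency means more samples are needed to
        # reach the same accuracy, inflating the predicted time.
        return total_samples / (stats["throughput"] * stats["efficiency"])

    feasible = {c: s for c, s in PROFILE.items() if s["power_w"] <= power_cap_w}
    return min(feasible, key=lambda c: predicted_time(feasible[c])) if feasible else None

if __name__ == "__main__":
    # With a 12 W cap, the high-frequency configs are infeasible; the
    # search trades a lower frequency for a larger, still-efficient batch.
    print("chosen (freq_MHz, batch):", pick_config(power_cap_w=12.0, total_samples=1_000_000))
```

The key design point this illustrates is that frequency and batch size are chosen jointly: a lower frequency paired with a larger batch can beat frequency scaling alone, provided the batch size's predicted statistical efficiency does not erase the throughput gain.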
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: - Discussion of periodic LUT updates in Section 4.1
- Additional time and energy results for EfficientViT in Section 5
- Additional visualizations of configurations, power, and training time in Appendix A.5
The changes are highlighted in blue.
Assigned Action Editor: ~Yoshitomo_Matsubara1
Submission Number: 4267