Abstract: This paper proposes Gradient Boosting-based training methods for deep models, namely GB-CNN and GB-DNN. The approach iteratively adds layers that learn the residual errors of the current model while freezing previously trained weights. The proposed models achieve strong performance on image-classification and tabular benchmarks.
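The core idea, fitting each new stage to the residuals left by earlier, frozen stages, can be illustrated with a minimal NumPy sketch. This is not the authors' GB-CNN/GB-DNN implementation; it uses simple linear stages, a hypothetical `fit_stage` helper, and an assumed shrinkage factor `lr` purely to show the boosting loop:

```python
import numpy as np

def fit_stage(X, residual):
    # Least-squares fit of one "stage" to the current residual
    # (stand-in for training one new layer).
    W, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return W

def gb_train(X, y, n_stages=3, lr=0.5):
    """Boosting-style training: each stage fits the residual of the
    current ensemble prediction; earlier stages stay frozen."""
    pred = np.zeros_like(y)
    stages = []
    for _ in range(n_stages):
        residual = y - pred          # errors left by the frozen stages
        W = fit_stage(X, residual)   # new stage learns the residual
        stages.append(W)             # freeze it; never retrained
        pred = pred + lr * (X @ W)   # shrunken additive update
    return stages, pred

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
stages, pred = gb_train(X, y)
```

Each pass through the loop only fits the newest stage, so the training error shrinks additively, mirroring the layer-by-layer residual fitting the abstract describes.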