Abstract: This paper proposes a gradient-boosting-based method for training neural networks by sequentially learning portions of the final layers. Each step fits the pseudo-residuals left by the preceding iterations, so the resulting predictor is an additive model. The approach improves learning efficiency compared to standard full-network training.
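The boosting-style training loop described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a squared loss (for which the pseudo-residuals are ordinary residuals), a fixed random feature map standing in for the earlier network layers, and linear output heads fitted by least squares as the per-iteration "portion of the final layer".

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]            # toy regression target

# Stand-in for the earlier network layers (assumption): a frozen random feature map.
W = rng.normal(size=(5, 32))
H = np.tanh(X @ W)                              # hidden representation

F = np.zeros(len(y))                            # running prediction of the additive model
nu = 0.5                                        # shrinkage (boosting learning rate)
for m in range(10):
    r = y - F                                   # pseudo-residuals for squared loss
    w, *_ = np.linalg.lstsq(H, r, rcond=None)   # fit one output head to the residuals
    F += nu * (H @ w)                           # additive update: F_m = F_{m-1} + nu * h_m

print(float(np.mean((y - F) ** 2)))             # training MSE after 10 boosting rounds
```

Each round adds one small piece of the output layer trained only on what the current ensemble has not yet explained, rather than retraining the full network end to end.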