Abstract: This paper extends Gradient Boosting to multi-task settings in which tasks share a common feature space but differ in their data distributions. The method uses a two-phase approach: joint learning across all tasks, followed by task-specific optimization. Experiments show improved performance over models trained independently on each task.
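The two-phase idea can be illustrated with a minimal sketch (this is an assumption-laden toy, not the paper's exact algorithm): phase 1 fits a joint gradient-boosted model on data pooled across tasks; phase 2 fits a per-task correction model on the joint model's residuals, so task-specific trees capture what the shared model misses. The task data, model sizes, and the residual-boosting formulation are all illustrative choices.

```python
# Sketch of a two-phase multi-task boosting scheme (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy tasks: shared feature space, differing data distributions.
tasks = {}
for t, slope in enumerate([1.0, 1.5]):
    X = rng.normal(size=(200, 3))
    y = slope * X[:, 0] + 0.1 * rng.normal(size=200)
    tasks[t] = (X, y)

# Phase 1: joint learning on data pooled across all tasks.
X_pool = np.vstack([X for X, _ in tasks.values()])
y_pool = np.concatenate([y for _, y in tasks.values()])
joint = GradientBoostingRegressor(n_estimators=50).fit(X_pool, y_pool)

# Phase 2: task-specific optimization — boost a small correction
# model on each task's residuals under the joint model.
task_corrections = {}
for t, (X, y) in tasks.items():
    residual = y - joint.predict(X)
    task_corrections[t] = GradientBoostingRegressor(n_estimators=25).fit(X, residual)

def predict(t, X):
    """Task-specific prediction: shared model plus per-task correction."""
    return joint.predict(X) + task_corrections[t].predict(X)
```

The residual formulation mirrors how boosting itself stacks learners: the per-task stage continues the additive expansion started by the joint stage.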