Abstract: Multitask learning has been a common technique for improving the representations learned by artificial neural networks for decades. However, its actual effects and trade-offs remain underexplored, especially in the context of document analysis. We demonstrate a simple and realistic scenario on real-world datasets in which multitask learning produces noticeably worse results than single-task learning. We hypothesize that slight shifts in the data manifold and task semantics are sufficient to cause adversarial competition between tasks inside networks, and we demonstrate this experimentally in two different multitask learning formulations.