- Abstract: Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a functional program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Our model effectively combines previous skill sets, does not suffer from forgetting, and is fully differentiable. We test our model on learning a set of visual reasoning tasks, and demonstrate improved performance on all tasks by learning progressively. By evaluating the reasoning process using human judges, we show that our model is more interpretable than an attention-based baseline.
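- Sketch: The core idea – a parent module that sends learned queries to child modules (treated as black boxes) and composes their outputs – can be illustrated with a minimal NumPy sketch. All names, shapes, and the composition scheme below are illustrative assumptions, not the paper's actual architecture.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)

  class Module:
      """A task solver that may call lower-level modules as black boxes.

      Hypothetical sketch: each call sends a query vector to a child and
      receives an output vector; the parent composes the child outputs
      with its own state. Shapes and projections are illustrative only.
      """

      def __init__(self, dim, children=()):
          self.children = list(children)  # solvers for simpler tasks
          # one learned query projection per child
          self.W_query = [rng.standard_normal((dim, dim)) * 0.1
                          for _ in self.children]
          # composition projection over own state plus all child outputs
          n_inputs = 1 + len(self.children)
          self.W_out = rng.standard_normal((dim, n_inputs * dim)) * 0.1

      def forward(self, state):
          outputs = [state]
          for child, W in zip(self.children, self.W_query):
              query = W @ state                 # learned query to the child
              outputs.append(child.forward(query))  # child is a black box
          return np.tanh(self.W_out @ np.concatenate(outputs))

  dim = 8
  count_module = Module(dim)                    # base module: no children
  exist_module = Module(dim, children=[count_module])
  compare_module = Module(dim, children=[count_module, exist_module])

  answer = compare_module.forward(rng.standard_normal(dim))
  print(answer.shape)  # → (8,)
  ```

  Because every operation is a differentiable function of the inputs, gradients would flow through the parent's queries into the composition weights, matching the abstract's claim that the model is fully differentiable end to end.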
- Data: [CLEVR](https://paperswithcode.com/dataset/clevr), [COCO](https://paperswithcode.com/dataset/coco), [Visual Genome](https://paperswithcode.com/dataset/visual-genome), [Visual Question Answering v2.0](https://paperswithcode.com/dataset/visual-question-answering-v2-0)