Divide and Conquer Networks

15 Feb 2018 (modified: 15 Sept 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study its implications for learning. This principle creates a powerful inductive bias that we leverage with neural architectures that are defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and Euclidean TSP. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity.
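To make the recursive split/merge scheme from the abstract concrete, here is a minimal sketch of the control flow it describes. The `split` and `merge` functions below are stand-ins for the two learned, scale-invariant modules; their names, signatures, the fixed-halving split, and the sorting base case are illustrative assumptions for exposition, not the DiCoNet architecture itself (see the linked repository for the actual implementation).

```python
# Sketch of the divide-and-conquer recursion described in the abstract.
# Assumption: `split` and `merge` stand in for trained neural modules;
# here they are hand-coded so the example runs end to end on a toy task.

def split(inputs):
    # Placeholder for the learned split operation: partition the input
    # into two smaller subsets. A fixed halving, for illustration only.
    mid = len(inputs) // 2
    return inputs[:mid], inputs[mid:]

def merge(left, right):
    # Placeholder for the learned merge operation: combine two partial
    # solutions into a larger one. Here, a sorted merge (as in merge sort).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def divide_and_conquer(inputs, base_size=1):
    # Recurse until the subproblem is small enough to solve directly,
    # then merge partial solutions on the way back up. The same two
    # modules are reused at every scale of the recursion.
    if len(inputs) <= base_size:
        return list(inputs)
    left, right = split(inputs)
    return merge(divide_and_conquer(left, base_size),
                 divide_and_conquer(right, base_size))

print(divide_and_conquer([5, 2, 9, 1, 7, 3]))  # -> [1, 2, 3, 5, 7, 9]
```

Because the recursion depth and branching are part of the computation graph, the depth of this recursion is exactly the quantity the paper regularizes when penalizing computational complexity.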
TL;DR: A dynamic model that learns divide-and-conquer strategies from weak supervision.
Keywords: Neural Networks, Combinatorial Optimization, Algorithms
Code: [alexnowakvila/DiCoNet](https://github.com/alexnowakvila/DiCoNet)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/divide-and-conquer-networks/code)