Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks

Published: 18 Mar 2023, Last Modified: 18 Mar 2023
Accepted by TMLR
Abstract: In this paper, we study the OOD generalization of neural algorithmic reasoning tasks, where the goal is to learn an algorithm (e.g., sorting, breadth-first search, or depth-first search) from input-output pairs using deep neural networks. First, we argue that OOD generalization in this setting is significantly different from common OOD settings. For example, some phenomena observed in OOD generalization for image classification, such as \emph{accuracy on the line}, do not appear here, and data augmentation techniques do not help because the assumptions underlying many augmentation methods are often violated. Second, we analyze the main challenges (e.g., input distribution shift, non-representative data generation, and uninformative validation metrics) of the current leading benchmark, CLRS \citep{deepmind2021clrs}, which contains 30 algorithmic reasoning tasks. We propose several solutions, including a simple yet effective fix to the input distribution shift and improved data generation. Finally, we propose an attention-based 2WL-graph neural network (GNN) processor that complements message-passing GNNs, such that their combination outperforms the state-of-the-art model by a $3\%$ margin averaged over all algorithms.
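To make the input distribution shift concrete: in CLRS-30, models are trained on small inputs and evaluated on larger ones. Below is a minimal sketch of loading the size-split data, assuming the public deepmind/clrs API as shown in that library's README (which the paper's code forks); `clrs.create_dataset` and the split names follow the README and may differ across versions.

```python
# Hedged sketch: load one CLRS-30 task with the deepmind/clrs library
# to illustrate the size-based distribution shift studied in the paper.
import clrs

# Train/validation samples use small inputs (16-node graphs)...
train_ds, num_train, spec = clrs.create_dataset(
    folder='/tmp/CLRS30', algorithm='bfs',
    split='train', batch_size=32)

# ...while the test split uses 4x larger inputs (64-node graphs), so a
# model must generalize out-of-distribution in input size.
test_ds, num_test, _ = clrs.create_dataset(
    folder='/tmp/CLRS30', algorithm='bfs',
    split='test', batch_size=32)

for feedback in train_ds.as_numpy_iterator():
    # Each batch is a Feedback namedtuple carrying the inputs, per-step
    # hints (intermediate algorithm traces), and ground-truth outputs.
    print(type(feedback).__name__)
    break
```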
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/smahdavi4/clrs
Assigned Action Editor: ~Changyou_Chen1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 554