Improving Neural Program Induction by Reflecting on Failures

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: program induction
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: A neural program induction model is considered good if it learns programs with four objectives: (1) high data efficiency, (2) an efficient training process, (3) strong performance, and (4) generalization to large-scale tasks. However, current neural program induction/synthesis models require a large number of training iterations and training examples. Moreover, even state-of-the-art models remain far from perfect in performance and generalization on tasks that require complex task-solving logic. To mitigate these challenges, we present a novel framework called FRGR (Failure Reflection Guided Regularizer). Our framework dynamically summarizes error patterns from the model's previous behavior and actively constrains the model from repeating mistakes of those patterns during training. In this way, the model is expected to converge faster and more data-efficiently, and to be less likely to fall into a local optimum, by making fewer mistakes of similar patterns. We evaluate FRGR on multiple relational reasoning and decision-making tasks under both data-rich and data-scarce settings. Experimental results show the effectiveness of FRGR in improving training efficiency, performance, generalization, and data efficiency.
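The abstract describes a regularizer that records error patterns from past training behavior and penalizes the model for repeating them. The paper's actual formulation is not given here, so the following is only a minimal illustrative sketch: the class name, the tuple-based "error pattern" signatures, and the additive penalty are all assumptions chosen to make the idea concrete, not the authors' implementation.

```python
from collections import Counter

class FailureReflectionRegularizer:
    """Hypothetical sketch of a failure-reflection-guided penalty.

    Mistakes from previous training steps are summarized as hashable
    "error pattern" signatures; the penalty grows each time the model's
    current behavior matches a previously recorded pattern.
    """

    def __init__(self, weight=0.5):
        self.weight = weight
        self.error_counts = Counter()  # pattern -> times observed

    def record_failure(self, pattern):
        """Summarize a mistake observed in a previous step."""
        self.error_counts[pattern] += 1

    def penalty(self, pattern):
        """Extra loss if current behavior repeats a known mistake;
        zero for behavior never seen to fail before."""
        return self.weight * self.error_counts[pattern]


# Toy training step: total loss = task loss + reflection penalty.
reg = FailureReflectionRegularizer(weight=0.5)
reg.record_failure(("rule_7", "wrong_branch"))  # mistake seen earlier
task_loss = 1.2
total = task_loss + reg.penalty(("rule_7", "wrong_branch"))  # penalized
novel = task_loss + reg.penalty(("rule_2", "ok_branch"))     # no penalty
```

In this sketch the penalty is a simple count-weighted term added to the task loss; the paper's regularizer presumably operates on richer summaries of model behavior, but the additive structure (task loss plus a term discouraging repeated error patterns) matches the mechanism the abstract describes.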
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5038