Support-Set Context Matters for Bongard Problems

Published: 10 Oct 2024, Last Modified: 10 Oct 2024. Accepted by TMLR. License: CC BY 4.0.
Abstract: Current machine learning methods struggle to solve Bongard problems, a type of IQ test that requires deriving an abstract “concept” from a set of positive and negative “support” images, and then classifying whether or not a new query image depicts the key concept. On Bongard-HOI, a benchmark of natural-image Bongard problems, most existing methods have reached at best 69% accuracy (where chance is 50%). This low accuracy is often attributed to neural networks’ inability to induce human-like symbolic rules. In this work, we point out that many existing methods forfeit accuracy due to a much simpler problem: they do not adapt image features using information contained in the support set as a whole, and instead rely on information extracted from individual supports. This is a critical issue, because the “key concept” in a typical Bongard problem can often only be distinguished using multiple positives and multiple negatives. We explore simple methods to incorporate this context and show substantial gains over prior work, leading to new state-of-the-art accuracy on Bongard-LOGO (75.3%) and Bongard-HOI (76.4%) compared to methods with equivalent vision backbone architectures, and strong performance on the original Bongard problem set (60.8%). Code is available at https://github.com/nraghuraman/bongard-context.
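To make the core idea concrete, here is a minimal illustrative sketch (not the paper's exact method — see the linked repository for that) of the difference between scoring a query against individual supports and first adapting all features with statistics of the support set as a whole. The feature vectors, dimensionality, and noise model below are toy stand-ins for real image embeddings.

```python
import numpy as np

def adapt_with_support_context(support, query):
    """Standardize features using mean/std of the *entire* support set,
    so each feature dimension is rescaled by how much it varies across
    both positives and negatives (a simple form of support-set context)."""
    mu = support.mean(axis=0)
    sigma = support.std(axis=0) + 1e-8  # avoid division by zero
    return (support - mu) / sigma, (query - mu) / sigma

def classify_query(pos, neg, query):
    """Return True if the context-adapted query is closer to the
    positive prototype than to the negative prototype."""
    support = np.concatenate([pos, neg], axis=0)
    support_adapted, q = adapt_with_support_context(support, query)
    pos_proto = support_adapted[: len(pos)].mean(axis=0)
    neg_proto = support_adapted[len(pos):].mean(axis=0)
    return np.linalg.norm(q - pos_proto) < np.linalg.norm(q - neg_proto)

# Toy data: positives cluster around a hidden "concept" vector,
# negatives are unstructured; the query depicts the concept.
rng = np.random.default_rng(0)
d = 16
concept = rng.normal(size=d)
pos = concept + 0.1 * rng.normal(size=(6, d))
neg = rng.normal(size=(6, d))
query = concept + 0.1 * rng.normal(size=d)
print(classify_query(pos, neg, query))
```

The key design point is that the standardization statistics depend jointly on positives and negatives, so the same query embedding can be rescaled differently in different problems — whereas a per-support scoring rule cannot react to which feature dimensions actually separate the two sides of a given Bongard problem.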
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Uploaded camera-ready version.
Video: https://www.youtube.com/watch?v=JO00GQHp0mQ
Code: https://github.com/nraghuraman/bongard-context
Supplementary Material: zip
Assigned Action Editor: ~Yannis_Kalantidis2
Submission Number: 2833