Keywords: Meta-learning, Concept Learning, Curvature-Aware Learning, Higher-Order Gradients
TL;DR: We show that meta-learning with higher-order gradients can improve both accuracy and sample efficiency in high-dimensional concept learning, a trend that holds as we systematically scale the number of features and the complexity of our concept space.
Abstract: Rapidly learning abstract rules from limited examples is a hallmark of human intelligence. This work investigates whether gradient-based meta-learning can equip neural networks with inductive biases for efficient few-shot acquisition of compositional Boolean concepts. We compare meta-learning strategies against a supervised learning baseline on Boolean tasks generated by a probabilistic context-free grammar, varying concept complexity and input dimensionality. Using a consistent multilayer perceptron (MLP) architecture, we evaluate performance based on final validation accuracy and learning efficiency. Our findings indicate that meta-learning, particularly when allowed more adaptation steps, offers significant advantages in data efficiency and final performance on lower-dimensional tasks. However, all methods face challenges as input dimensionality and concept complexity increase, highlighting the intricate interplay between learning strategies, task structure, and data representation in high-dimensional settings.
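The gradient-based meta-learning the abstract describes can be illustrated with a minimal second-order (MAML-style) sketch. This is a toy illustration under assumed simplifications, not the paper's actual setup: it meta-learns a single scalar parameter on 1-D quadratic tasks, whereas the paper trains an MLP on PCFG-generated Boolean concepts. The point is the structure: an inner adaptation step, followed by an outer update whose gradient flows through the inner step and therefore carries a higher-order term.

```python
# Hedged toy sketch of second-order meta-learning (MAML-style).
# Assumptions (not from the paper): each task i has loss
# L_i(theta) = (theta - c_i)^2, so all gradients are analytic
# and the second-order term can be written out by hand.
import random

def maml_sketch(task_centers, alpha=0.1, beta=0.05, meta_steps=200, seed=0):
    rng = random.Random(seed)
    theta = 0.0  # meta-learned initialization
    for _ in range(meta_steps):
        c = rng.choice(task_centers)          # sample a task
        # Inner adaptation step: theta' = theta - alpha * dL/dtheta
        grad_inner = 2.0 * (theta - c)
        theta_adapted = theta - alpha * grad_inner
        # Outer (meta) gradient of L(theta') w.r.t. theta.
        # The chain rule introduces the higher-order factor
        # d(theta')/d(theta) = 1 - alpha * d^2L/dtheta^2 = 1 - 2*alpha.
        grad_outer = (1.0 - 2.0 * alpha) * 2.0 * (theta_adapted - c)
        theta -= beta * grad_outer
    return theta

# Meta-training over tasks centered at -1 and +1 drives theta toward
# an initialization that adapts quickly to either task.
theta_star = maml_sketch([-1.0, 1.0])
```

A first-order variant would simply drop the `(1.0 - 2.0 * alpha)` factor; comparing the two isolates the contribution of the higher-order gradient term.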
Student Paper: Yes
Submission Number: 92