Keywords: meta-learning, few-shot learning, federated learning, scientific machine learning, inverse problems, controlled experiments, negative results, physics-driven learning, domain shift
TL;DR: Few-shot and meta-learning fail to improve generalization in a controlled photonic inverse design setting, while federated meta-learning faithfully preserves centralized behavior without performance gains.
Abstract: Few-shot and meta-learning are frequently proposed as mechanisms for rapid adaptation in scientific inverse problems, yet the conditions under which they provide genuine generalization benefits remain poorly understood. We conduct a controlled empirical investigation of this question in the context of photonic inverse design, comparing transfer learning, centralized model-agnostic meta-learning, and federated meta-learning under structured physical domain shift and strict data locality constraints. Using a large-scale, physics-consistent synthetic dataset of 500,000 photonic grating coupler simulations, we induce a deterministic non-IID setting by partitioning data across federated clients according to a single physical parameter, the grating period. All methods are evaluated within a unified experimental framework with identical architectures, optimization procedures, and statistically stabilized evaluation protocols. We observe that transfer learning exhibits strong zero-shot generalization and achieves the lowest absolute error across all regimes. In contrast, both centralized and federated meta-learning display decreasing support-set loss during adaptation without corresponding improvements in test performance. Moreover, federated meta-learning closely matches centralized meta-learning without statistically significant degradation, indicating that federation preserves learning dynamics while primarily offering privacy and decentralization rather than intrinsic performance gains. These results provide a controlled falsification of common assumptions about few-shot adaptation in physics-driven inverse problems and help delineate the practical limits of meta-learning in high-dimensional scientific regression.
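The deterministic non-IID construction described in the abstract (partitioning samples across federated clients by a single physical parameter, the grating period) can be sketched as below. This is an illustrative sketch only: the function name, the client count, and the period range are assumptions not stated in the abstract.

```python
import numpy as np

def partition_by_period(periods, n_clients):
    """Deterministic non-IID split: bin samples by grating period so each
    client receives a contiguous, disjoint range of the partitioning
    parameter. (Hypothetical helper, not the authors' code.)"""
    # Quantile edges give clients roughly equal sample counts.
    edges = np.quantile(periods, np.linspace(0.0, 1.0, n_clients + 1))
    # Interior edges map each period to a client index in 0..n_clients-1.
    return np.digitize(periods, edges[1:-1])

# Hypothetical period range in micrometers, for illustration only.
periods = np.random.default_rng(0).uniform(0.4, 1.2, size=10_000)
client_ids = partition_by_period(periods, n_clients=5)
```

Because the split is a deterministic function of one physical parameter, the induced domain shift between clients is structured and reproducible, unlike random Dirichlet-style non-IID partitions.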
Anonymization: This submission has been anonymized for double-blind review by removing identifying information such as names, affiliations, and identifying URLs.
Style Files: I have used the style files.
Submission Number: 11