From Kakeya to Kernels: A Multi-Scale Geometric Framework for Robust Representation Learning

16 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Representation learning, multi-scale analysis, deep learning theory, multimodal learning, Kakeya conjecture
TL;DR: This paper introduces a multi-scale geometric framework that defines representation fields and proves a Sticky Representation Theorem linking geometric "stickiness" to robustness.
Abstract: This paper addresses the gap between the empirical efficacy of deep learning and the theoretical understanding of its robustness by introducing a novel geometric framework for representation learning, inspired by the multi-scale analysis techniques used to resolve the Kakeya set conjecture. The concept of a representation field is formalized, modeling feature activations as geometric entities, and the notion of "stickiness" is defined as the stability of this geometric structure across network layers. The multi-scale Wolff axioms quantify this stability as a formal measure of representation quality. The principal contribution is the Sticky Representation Theorem, which establishes a provable relationship between a network's geometric stickiness and its functional robustness to input perturbations, as well as its resilience to missing modalities in multimodal settings. To operationalize this theoretical framework, the Katz-Tao Convex Wolff (KT-CW) Regularizer is derived as an architecture-agnostic loss term that incentivizes the learning of provably robust, sticky representations. This work presents a unified approach for analyzing, understanding, and constructing more reliable AI systems in both single- and multi-modal contexts.
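The abstract does not give the KT-CW Regularizer's formula, so the following is a purely illustrative sketch of what an architecture-agnostic, cross-layer "stickiness" penalty could look like: it compares the normalized pairwise-distance geometry of activations at consecutive layers, so that a low penalty corresponds to geometric structure that persists across depth. The function name `stickiness_penalty` and the specific distance-based notion of geometric stability are assumptions for illustration, not the paper's actual construction.

```python
import torch

def stickiness_penalty(activations, eps=1e-8):
    """Hypothetical sketch of a cross-layer 'stickiness' penalty.

    `activations` is a list of [batch, dim_l] tensors, one per layer
    (dimensions may differ across layers). We penalize changes in the
    normalized pairwise-distance structure between consecutive layers,
    so a low value means the representation geometry is stable
    ('sticky') across depth. This is NOT the paper's KT-CW Regularizer,
    only an illustrative stand-in.
    """
    def geometry(h):
        d = torch.cdist(h, h)            # pairwise distances, [B, B]
        return d / (d.norm() + eps)      # scale-invariant structure

    penalty = torch.zeros(())
    for h_prev, h_next in zip(activations[:-1], activations[1:]):
        penalty = penalty + (geometry(h_next) - geometry(h_prev)).pow(2).sum()
    return penalty
```

Because the penalty depends only on per-layer activation matrices, it could in principle be added to any training loss regardless of architecture, matching the abstract's "architecture-agnostic loss term" framing.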
Primary Area: learning theory
Submission Number: 6512