Keywords: fractal geometry, representation learning, attractor dynamics, interpretable ML, symbolic abstraction, neural representations, manifold learning, interpretability, neuro-symbolic AI
TL;DR: Rabbit Brain uses fractal attractor geometry to form discrete and interpretable categories, offering a simple symbolic component that can complement manifold-based representations.
Abstract: Representation learning in modern neural networks is typically grounded in the
assumption that data lie on or near smooth manifolds. This supports continuity
and gradient-based optimization but can make it difficult to express stable,
discrete, or symbolic categories. We introduce Neurosymbolic Rabbit Brain,
a framework that models representations using fractal attractor geometry,
where categories are defined by basin membership under simple iterative maps. As
a minimal instantiation, we implement a two-Julia escape-time comparator and
evaluate it on the Two Spirals benchmark using CMA-ES. Across 10 runs,
the baseline model achieves $54.3\% \pm 2.1\%$ test accuracy, exceeding the
logistic baseline ($\sim 50\%$). An enhanced variant, which preserves the same
eight parameters but adds a log--polar prewarp, smooth escape-time scoring, a
curriculum on iteration depth, and multiple restarts, improves robustness and
reaches $61.9\% \pm 2.1\%$. While not competitive with an RBF--SVM, these results
demonstrate that attractor-based basin geometry can function as a simple and
transparent classifier on nonlinear structure, suggesting potential for hybrid
systems that pair continuous manifold encoders with discrete fractal partitions.
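The core mechanism can be sketched in a few lines: each class is associated with one Julia parameter, a point is iterated under the quadratic map $z \mapsto z^2 + c$ for each parameter, and the class whose dynamics keep the point bounded longer wins. This is a minimal illustration of an escape-time comparator, not the paper's implementation; the escape radius, iteration budget, and tie-breaking rule here are assumptions, and the enhanced variant's prewarp, smooth scoring, and curriculum are omitted.

```python
def escape_time(z0: complex, c: complex, max_iter: int = 64, radius: float = 2.0) -> int:
    """Iterate z -> z**2 + c from z0; return the first step at which |z|
    exceeds the escape radius, or max_iter if the orbit stays bounded."""
    z = z0
    for t in range(max_iter):
        if abs(z) > radius:
            return t
        z = z * z + c
    return max_iter


def classify(x: float, y: float, c0: complex, c1: complex, max_iter: int = 64) -> int:
    """Two-Julia comparator: assign the class whose Julia parameter keeps
    the input point in its basin (bounded) for more iterations.
    Ties are broken toward class 0 (an illustrative choice)."""
    z0 = complex(x, y)
    t0 = escape_time(z0, c0, max_iter)
    t1 = escape_time(z0, c1, max_iter)
    return 0 if t0 >= t1 else 1
```

In the paper's setup, the Julia parameters (together with the remaining free parameters, eight in total) would be fit by CMA-ES against classification accuracy on the Two Spirals training set; here they are simply given.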
Poster Pdf: pdf
Submission Number: 66