Keywords: Singular learning theory, Bayesian inference, Turing machines, Linear logic, Error syndromes, Real algebraic geometry
TL;DR: This paper shows that the internal structure of computer programs can be understood through the geometry of "singularities" by analyzing how different patterns of execution errors (called "error syndromes") affect program outputs.
Abstract: We develop a correspondence between the structure of Turing machines and the
structure of singularities of real analytic functions, based on connecting the Ehrhard-
Regnier derivative from linear logic with the role of geometry in Watanabe’s singular
learning theory. The correspondence works by embedding ordinary (discrete) Turing
machine codes into a family of “noisy” codes which form a smooth parameter space.
On this parameter space we consider a potential function which has Turing machines
as critical points. By relating the Taylor series expansion of this potential at such a
critical point to the combinatorics of error syndromes, we relate the local geometry to the
internal structure of the Turing machine.
The potential in question is the negative log-likelihood for a statistical model,
so that the structure of the Turing machine and its associated singularity is further
related to Bayesian inference. Two algorithms that produce the same predictive
function can nonetheless correspond to singularities with different geometries, which
implies that the Bayesian posterior can discriminate between distinct algorithmic
implementations, contrary to a purely functional view of inference. In the context
of singular learning theory our results point to a more nuanced understanding of
Occam’s razor and the meaning of simplicity in inductive inference.
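The abstract's claim that distinct algorithms computing the same predictive function can sit at singularities with different geometries can be illustrated with a toy potential. The example below is not from the paper; it is a minimal, hypothetical sketch of the general phenomenon in singular learning theory where a level set of minimizers, rather than an isolated minimum, makes the potential "singular" (degenerate Hessian along the set).

```python
# Hypothetical toy potential K(a, b) = (a*b - 1)^2. Every parameter
# pair with a*b = 1 implements the same "function", so the zero set
# of K is a whole curve of minima -- a non-isolated (singular) minimum,
# analogous to distinct parameterizations yielding one predictive function.

def potential(a, b, target=1.0):
    """Analytic potential whose minimizers form the curve {a*b = target}."""
    return (a * b - target) ** 2

# Two distinct parameter choices, same function, same zero potential:
print(potential(1.0, 1.0))  # 0.0
print(potential(2.0, 0.5))  # 0.0
# A parameter off the curve pays a positive penalty:
print(potential(2.0, 1.0))  # 1.0
```

Along the curve {a*b = 1} the Hessian of K has a zero eigenvalue, which is the kind of degeneracy that, in Watanabe's framework, changes the local geometry seen by the Bayesian posterior.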
Serve As Reviewer: ~Mikhail_Samin1, ~John_Little1
Confirmation: I confirm that I and my co-authors have read the policies and are releasing our work under a CC-BY 4.0 license.
Submission Number: 1