SGM: A Statistical Gödel Machine for Risk-Controlled Self-Improvement

19 Sept 2025 (modified: 12 Feb 2026) · ICLR 2026 Conference Desk Rejected Submission · CC BY 4.0
Keywords: Recursive self-modification, Statistical risk control, Confirm-Triggered Harmonic Spending, Anytime-valid testing, AutoML, Reinforcement learning
TL;DR: Statistical Gödel Machine: a safety layer that certifies improvements and controls risk in self-modifying learning systems.
Abstract: Recursive self-modification is increasingly central in AutoML, neural architecture search, and adaptive optimization, yet no existing framework ensures that such changes are made safely. Gödel machines offer a principled safeguard by requiring formal proofs of improvement before rewriting code; however, such proofs are unattainable in stochastic, high-dimensional settings. We introduce the Statistical Gödel Machine (SGM), the first statistical safety layer for recursive edits. SGM replaces proof-based requirements with statistical confidence tests (e-values, Hoeffding bounds), admitting a modification only when its superiority is certified at a chosen confidence level, and allocating a global error budget to bound cumulative risk across rounds. We also propose Confirm-Triggered Harmonic Spending (CTHS), which indexes spending by confirmation events rather than rounds, concentrating the error budget on promising edits while preserving familywise validity. Experiments across supervised learning, reinforcement learning, and black-box optimization support these claims: SGM certifies genuine gains on CIFAR-100, rejects a spurious improvement on ImageNet-100, and remains robust on RL and optimization benchmarks. Together, these results position SGM as foundational infrastructure for continual, risk-aware self-modification in learning systems. Code is available at: https://github.com/gravitywavelet/sgm-anon.
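
To make the admission rule concrete, below is a minimal Python sketch of the mechanism the abstract describes: a Hoeffding lower confidence bound on paired score differences certifies superiority at the current spending level, and a confirm-triggered schedule draws down a global error budget only when an edit is admitted. All names (`hoeffding_lcb`, `CTHSBudget`, `certify_edit`) and the specific summable spending sequence are illustrative assumptions, not the paper's exact implementation.

```python
import math

def hoeffding_lcb(diffs, alpha):
    """Lower confidence bound on the mean of paired score differences
    (candidate minus baseline), assumed bounded in [-1, 1].
    Hoeffding: P(mean - E[d] >= eps) <= exp(-n * eps^2 / 2) for range 2."""
    n = len(diffs)
    mean = sum(diffs) / n
    eps = math.sqrt(2.0 * math.log(1.0 / alpha) / n)
    return mean - eps

class CTHSBudget:
    """Illustrative confirm-triggered spending schedule: the k-th
    *confirmation* spends alpha_total * 6 / (pi^2 * k^2), a summable
    sequence, so total spend never exceeds alpha_total. The paper's
    exact CTHS rule may differ; this only shows the indexing idea."""
    def __init__(self, alpha_total=0.05):
        self.alpha_total = alpha_total
        self.k = 1  # index of the next confirmation event

    def current_level(self):
        return self.alpha_total * 6.0 / (math.pi ** 2 * self.k ** 2)

    def confirm(self):
        self.k += 1  # advance on confirmation events, not on rounds

def certify_edit(diffs, budget):
    """Admit a candidate self-modification only if its superiority is
    certified at the current level; spend budget only on admission."""
    alpha = budget.current_level()
    if hoeffding_lcb(diffs, alpha) > 0.0:
        budget.confirm()
        return True
    return False
```

Indexing spending by confirmations rather than rounds means failed candidate edits do not exhaust the budget; only admitted modifications consume error probability, which is how the abstract's "concentrating the error budget on promising edits" reads in practice.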
Primary Area: optimization
Submission Number: 18952