Abstract: The dominant paradigm in machine intelligence defines learning as the minimisation of an empirical loss function. While successful, this approach often results in brittle systems prone to catastrophic forgetting and reward hacking, contrasting with the homeostatic stability observed in biological organisms. We propose Constraint‑First Machine Learning (CFML), a paradigm that redefines learning as the maintenance of feasibility under an expanding set of structural constraints rather than the pursuit of a global optimum. Utilising Viability Projection Updates (VPU) based on stochastic differential inclusions, we demonstrate that learning can occur as a stochastic drift within a viability manifold; for data‑driven constraints, only a tiny Constraint Anchor Set (e.g., five exemplars per class) is stored to detect violations, with no full replay required. Our evaluations on ResNet‑18 architectures demonstrate that CFML sustains 85.9% final accuracy and 93.3% relative retention across sequential tasks, whereas Elastic Weight Consolidation retains only 42% and naive SGD collapses to chance level. Even experience replay, with the same memory budget, gradually erodes safety boundaries. Furthermore, we show that CFML resolves localised conflicts where structural invariants directly contradict empirical data—a scenario where standard optimisation either violates safety or suffers significant utility stagnation, while Lagrangian regularisation leads to complete utility collapse. By shifting the objective of machine intelligence from minimisation to viability, CFML provides a mathematically rigorous framework for safe, stable, and biologically plausible lifelong learning.
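The abstract does not spell out the update rule, but the core idea it describes — a stochastic gradient drift followed by a projection back into the feasible set whenever a small Constraint Anchor Set detects a violation — can be sketched on a toy problem. Everything below (the linear model, the anchor-MSE constraint, the tolerance `EPS`, and the function names) is a hypothetical illustration of a Viability Projection Update, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear model, task data, and a small "constraint anchor
# set" whose fit must be preserved (hypothetical stand-in for the
# paper's Constraint Anchor Set of five exemplars).
w_true = np.array([1.0, -2.0, 0.5])
X_task = rng.normal(size=(64, 3))
y_task = X_task @ w_true + 0.1 * rng.normal(size=64)
X_anchor = rng.normal(size=(5, 3))   # five anchor exemplars, as in the abstract
y_anchor = X_anchor @ w_true
EPS = 0.05                           # feasibility tolerance (assumed)

def anchor_violation(w):
    """Mean squared error on the anchor set; > EPS means infeasible."""
    r = X_anchor @ w - y_anchor
    return float(np.mean(r ** 2))

def viability_projection_update(w, lr=0.05, proj_lr=0.1, max_proj=50):
    """One sketched VPU step: stochastic drift on the task loss,
    then an approximate projection back into the viable set."""
    # 1. Ordinary stochastic gradient drift on a task minibatch.
    idx = rng.integers(0, len(X_task), size=16)
    g = 2 * X_task[idx].T @ (X_task[idx] @ w - y_task[idx]) / len(idx)
    w = w - lr * g
    # 2. Approximate projection: descend the constraint loss until the
    #    anchor set no longer signals a violation.
    for _ in range(max_proj):
        if anchor_violation(w) <= EPS:
            break
        gc = 2 * X_anchor.T @ (X_anchor @ w - y_anchor) / len(X_anchor)
        w = w - proj_lr * gc
    return w

w = np.zeros(3)
for _ in range(200):
    w = viability_projection_update(w)

print(anchor_violation(w))  # should sit within the feasibility tolerance
```

The point of the sketch is the order of operations: the drift step is unconstrained, and feasibility is restored afterwards using only the tiny anchor set, with no replay of past task data.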
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Pablo_Samuel_Castro1
Submission Number: 8668