The Geometry of Stability: A Cohomological View on Preference Cycles and Algorithmic Robustness

TMLR Paper 5618 Authors

13 Aug 2025 (modified: 28 Aug 2025) · Under review for TMLR · CC BY 4.0
Abstract: Algorithmic stability—the robustness of predictions to training data perturbations—is fundamental to reliable machine learning. While methods like bagging, regularization, and inflated operators improve stability, they appear as disconnected techniques. We propose a unified mathematical framework demonstrating that algorithmic instability often arises from fundamental inconsistencies in local data preferences, mathematically analogous to Condorcet cycles in social choice theory. We formalize these inconsistencies as cohomological obstructions ($H^1 \neq 0$), leveraging established connections between social choice theory and algebraic topology. This framework reveals bagging as a strategy for obstruction prevention (smoothing the preference landscape) and inflated operators as a strategy for obstruction resolution (target space enlargement). Furthermore, we derive a novel technique from this framework, obstruction-aware regularization, which directly enforces mathematical consistency. We provide direct empirical validation for our claims. First, we demonstrate that engineered Condorcet cycles induce high instability in standard methods, which is resolved by inflated operators. Second, using Hodge decomposition, we confirm that bagging significantly reduces the magnitude of cohomological obstructions. Third, we show that our proposed obstruction-aware regularization successfully reduces mathematical inconsistencies and yields substantial improvements across multiple metrics of algorithmic stability.
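The abstract's use of Hodge decomposition to quantify cohomological obstructions in pairwise preferences can be illustrated with a minimal numeric sketch. This is an assumption-laden toy, not the paper's implementation: the skew-symmetric flow matrix `Y` (an engineered three-alternative Condorcet cycle A>B>C>A), the incidence-matrix construction, and the normalized `obstruction` score are all illustrative choices. The flow is split into a gradient part (consistent with a global ranking) and a cyclic remainder, whose relative norm plays the role of the $H^1$ obstruction magnitude.

```python
import numpy as np

# Hypothetical example: pairwise preference flow on 3 alternatives forming
# a Condorcet cycle A>B>C>A, encoded as a skew-symmetric matrix Y.
Y = np.array([[0.,  1., -1.],
              [-1., 0.,  1.],
              [1., -1.,  0.]])

n = Y.shape[0]
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

# Incidence (gradient) matrix: maps node potentials s to edge flows s_j - s_i.
G = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    G[k, i], G[k, j] = -1.0, 1.0
y = np.array([Y[i, j] for i, j in edges])

# Hodge decomposition of the edge flow: y = (gradient part) + (cyclic part).
s, *_ = np.linalg.lstsq(G, y, rcond=None)  # best-fit global ranking potentials
cyclic = y - G @ s                          # curl + harmonic (the obstruction)
obstruction = np.linalg.norm(cyclic) / np.linalg.norm(y)
# For a pure Condorcet cycle the flow has no gradient part, so obstruction = 1.0.
```

A fully consistent preference flow (e.g. one induced by a single scoring function) would instead give `obstruction` near 0; smoothing techniques such as bagging should, on this account, push the score toward that regime.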
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Alberto_Bietti1
Submission Number: 5618