Keywords: Statistical Learning Theory, Replicability, Reproducibility
TL;DR: We study computational aspects of algorithmic replicability with the aim of better understanding the computational connections between replicability and other learning paradigms.
Abstract: We study computational aspects of algorithmic replicability, a notion of stability introduced by Impagliazzo, Lei,
Pitassi, and Sorrell [STOC, 2022]. Motivated by a recent line of work that established strong statistical connections between
replicability and other notions of learnability such as online learning, private learning, and SQ learning, we aim to
better understand the computational connections between replicability and these learning paradigms.
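For context, the replicability notion of Impagliazzo et al. [STOC, 2022] asks that two executions of an algorithm on independent samples, but with shared internal randomness, produce the same output (a paraphrase; the parameter $\rho$ follows their convention): an algorithm $A$ is $\rho$-replicable if for every distribution $\mathcal{D}$,
$$\Pr_{S, S' \sim \mathcal{D}^n,\; r}\big[A(S; r) = A(S'; r)\big] \ge 1 - \rho,$$
where $S$ and $S'$ are independent samples and $r$ denotes the internal randomness shared across the two runs.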
Our first result shows that there is a concept class that is efficiently replicably PAC learnable, but, under standard
cryptographic assumptions, no efficient online learner exists for this class. Subsequently, we design an efficient
replicable learner for PAC learning parities when the marginal distribution is far from uniform, making progress on a
question posed by Impagliazzo et al. [STOC, 2022]. To obtain this result, we design a replicable lifting framework, inspired by
Blanc, Lange, Malik, and Tan [STOC, 2023], that transforms, in a black-box manner, efficient replicable PAC learners under the
uniform marginal distribution over the Boolean hypercube into replicable PAC learners under any marginal distribution,
with sample and time complexity depending on a certain measure of the complexity of the distribution.
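To illustrate the basic mechanism behind replicable estimation (this is the standard random-grid rounding idea from Impagliazzo et al. [STOC, 2022], shown only as a minimal sketch, not the lifting framework above; function and parameter names are ours):

```python
import random

def replicable_mean(samples, alpha, rng):
    """Replicably estimate a mean to accuracy ~alpha.

    `rng` must be seeded with the randomness SHARED across runs,
    while `samples` are drawn fresh in each run. Rounding to a
    randomly offset grid of width 2*alpha means two empirical means
    at distance d from each other coincide after rounding with
    probability 1 - d/(2*alpha).
    """
    width = 2 * alpha
    offset = rng.uniform(0, width)       # fixed by the shared randomness
    p_hat = sum(samples) / len(samples)  # varies between runs
    # Snap the empirical mean to the nearest point of the offset grid.
    return offset + width * round((p_hat - offset) / width)

# Two runs on independent samples, same shared randomness:
data1 = [random.random() < 0.3 for _ in range(100_000)]
data2 = [random.random() < 0.3 for _ in range(100_000)]
est1 = replicable_mean(data1, 0.01, random.Random(1234))
est2 = replicable_mean(data2, 0.01, random.Random(1234))
print(est1, est2)  # identical with high probability
```

Since the rounded value is within `alpha` of the empirical mean, accuracy degrades by at most `alpha`, while the shared offset makes the two runs agree whenever their empirical means avoid a grid boundary.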
Finally, we show that any pure DP learner can be transformed, in a black-box manner, into a replicable learner, with time complexity polynomial in the confidence and accuracy parameters but exponential in the representation dimension of the underlying hypothesis class.
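For readers comparing the two notions, recall the standard definition (stated here only for context): a learner $A$ is pure DP, i.e. $\varepsilon$-differentially private, if for all datasets $S, S'$ differing in a single example and every event $E$,
$$\Pr[A(S) \in E] \le e^{\varepsilon}\,\Pr[A(S') \in E].$$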
Primary Area: Learning theory
Submission Number: 7242