Reviewer eeAp
Q1 Summary And Contributions:
The authors seek to produce post-hoc explanations of black-box models that optimize a linear combination of useful properties, namely (specific notions of) faithfulness, robustness, smoothness, and sparsity. They do so by proposing learning/optimization algorithms -- based both on direct mathematical optimization and on MAP inference in a related Gaussian Process -- that explicitly solve for the aforementioned properties, covering two distinct regimes: inductive vs. transductive. Experiments in both settings, on simple functions and on neural networks, suggest that the proposed setup meets expectations, at least for simpler functions and lower-dimensional inputs.

Q2-1 Originality-Novelty: 3: Good: The paper makes non-trivial advances over the current state-of-the-art.
Q2-2 Correctness-Technical Quality: 3: Good: The paper appears to be technically sound, but I have not carefully checked the details.
Q2-3 Extent To Which Claims Are Supported By Evidence: 2: Fair: the main claims are somewhat supported by evidence (but the experimental evaluation may be weak, or does not match entirely with the claims, important baselines may be missing, proofs contain important ideas but lack rigor, algorithmic details are only discussed superficially, references are imprecise, assumptions are not sufficiently motivated or explicated, etc.).
Q2-4 Reproducibility: 1: Poor: key details (e.g. proof sketches, experimental setup) are incomplete/unclear, or key resources (e.g. proofs, code, data) are unavailable.
Q2-5 Clarity Of Writing: 4: Excellent: The paper is well-organized and clearly written.
Q3 Main Strengths:
Novelty
Treating post-hoc explainability explicitly as a mathematical optimization problem is quite refreshing (although one could argue that most post-hoc approaches do so implicitly).
The link to GP maximization is interesting and novel.
I also appreciated the distinction between inductive and transductive settings.
The proposed setup is quite flexible, and I am confident it will open the door to follow-up work.
Correctness
To the best of my understanding, the method appears to be sound.
The experiments are set up correctly.
Evidence
Support is quite good when the goal is generating "explanations" of simpler functions.
Clarity
The text is very well written and easy to follow.