Bounding Neyman-Pearson Region with f-Divergences
Abstract: The Neyman-Pearson region of a simple binary hypothesis testing problem is the set of
points whose coordinates are the false positive rate and false negative rate
of some test. The lower boundary of this region is given by the Neyman-Pearson
lemma and is, up to a coordinate change, equivalent to the optimal ROC curve. We
establish a novel lower bound for this boundary in terms of any f-divergence. Since
the bounds generated by hockey-stick f-divergences characterize the Neyman-Pearson
boundary, this family of bounds is best possible. In the case of KL divergence,
this bound improves Pinsker’s inequality. Furthermore, we obtain a closed-form
refined upper bound for the Neyman-Pearson boundary in terms of the Chernoff
α-coefficient. Finally, we present methods for constructing pairs of distributions
that can approximately or exactly realize any given Neyman-Pearson boundary.
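The hockey-stick characterization mentioned above can be checked numerically in a simple case. The sketch below, which is an illustration rather than part of the paper, takes the Gaussian pair N(0,1) versus N(mu,1): each likelihood-ratio threshold test attains a point (alpha, beta) on the Neyman-Pearson boundary, and at the matching hockey-stick parameter gamma the lower bound beta >= 1 - gamma*alpha - E_gamma holds with equality. The closed-form expression for E_gamma between unit-variance Gaussians is the standard one; the names `Phi` and `hockey_stick_gauss` are illustrative, not from the paper.

```python
from math import erf, exp, log, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def hockey_stick_gauss(mu, gamma):
    """E_gamma(N(mu,1) || N(0,1)) = sup_A [Q1(A) - gamma * Q0(A)],
    using the closed form for unit-variance Gaussians."""
    return Phi(mu / 2 - log(gamma) / mu) - gamma * Phi(-mu / 2 - log(gamma) / mu)

mu = 1.0
for t in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    alpha = Phi(-t)       # false positive rate of the test "x > t" under N(0,1)
    beta = Phi(t - mu)    # false negative rate under N(mu,1)
    # Likelihood-ratio value at the threshold t: dQ1/dQ0 = exp(mu*x - mu^2/2).
    gamma = exp(mu * t - mu ** 2 / 2)
    # Hockey-stick lower bound on beta; tight at the matching gamma.
    bound = 1.0 - gamma * alpha - hockey_stick_gauss(mu, gamma)
    assert abs(beta - bound) < 1e-9
print("hockey-stick bound is tight at every likelihood-ratio test")
```

Varying gamma over all nonnegative values and taking the pointwise supremum of these linear lower bounds recovers the entire lower boundary of the Neyman-Pearson region for this pair.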