Approval policies for modifications to machine learning-based software as a medical device: A study of bio-creep
Abstract: Successful deployment of machine learning algorithms in healthcare requires careful assessments of their
performance and safety. To date, the FDA approves locked algorithms prior to marketing and requires future updates to
undergo separate premarket reviews. However, this negates a key feature of machine learning: the ability to learn from a
growing dataset and improve over time. This paper frames the design of an approval policy, which we refer to as an automatic
algorithmic change protocol (aACP), as an online hypothesis testing problem. Because this process is closely analogous to
noninferiority testing of new drugs, we investigate how repeated testing and adoption of modifications might lead to gradual
deterioration in prediction accuracy, known as "bio-creep" in the drug development literature. We examine simple policies
that one might adopt but that offer no error-rate guarantees, as well as policies that do provide error-rate
control. For the latter, we define two online error rates appropriate for this context: Bad Approval Count (BAC) and Bad
Approval and Benchmark Ratios (BABR). We control these rates in the simple setting of a constant population and data source
using policies aACP-BAC and aACP-BABR, which combine alpha-investing, group-sequential, and gate-keeping methods. In
simulation studies, bio-creep regularly occurred under policies with no error-rate guarantees, whereas aACP-BAC and
aACP-BABR controlled the rate of bio-creep without substantially impairing our ability to approve beneficial modifications.
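
As a rough illustration of the online hypothesis testing framing, the sketch below implements a bare-bones alpha-investing rule (Foster & Stine, 2008) in Python, treating each proposed modification as one test and each rejection as an approval. This is an assumption-laden sketch, not the paper's aACP-BAC or aACP-BABR procedures, which additionally layer group-sequential and gate-keeping components on top of alpha-investing; the function name, initial wealth, payout value, and betting rule are all illustrative choices.

```python
import numpy as np

def alpha_investing(p_values, alpha=0.05):
    """Minimal alpha-investing rule (Foster & Stine, 2008) for online testing.

    Each p-value comes from a noninferiority test of one proposed
    modification; a rejection corresponds to approving that modification.
    Alpha-wealth is spent on every test and partially earned back on each
    approval, which is what bounds the long-run rate of bad approvals.
    """
    wealth = alpha / 2   # initial alpha-wealth (illustrative choice)
    payout = alpha / 2   # wealth earned per rejection (illustrative choice)
    decisions = []
    for p in p_values:
        # Bet half of the current wealth: with level = w / (2 + w), the
        # cost level / (1 - level) equals w / 2, so wealth never goes negative.
        level = wealth / (2 + wealth)
        reject = bool(p <= level)
        wealth += payout if reject else -level / (1 - level)
        decisions.append(reject)
    return decisions

# Hypothetical stream of p-values, one per proposed model update.
rng = np.random.default_rng(0)
print(alpha_investing(rng.uniform(size=10)))
```

Because the testing level shrinks as wealth is spent and is replenished only by approvals, a run of bad proposals rapidly exhausts the budget, which is the mechanism that limits bio-creep in this simplified setting.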