Abstract: Many real-world applications of machine learning involve handling data collected over an extended period of time. The longer this period, the more likely the underlying characteristics of the data are to change, potentially degrading prediction accuracy and impacting decision-making. This phenomenon, commonly referred to as data drift, poses a risk in the context of medical AI regulation and monitoring. Regulatory bodies must regularly assess previously approved models on new data, realistically even in scenarios where prediction labels are not yet available, making direct tracking of model performance infeasible. In this paper, we introduce a comprehensive framework to estimate the performance drift of a model when it is evaluated on new, unlabelled target data. Our method assesses i) the uncertainty in model predictions and ii) the discrimination error between training batches and subsequent test batches, which serve as key indicators for identifying drift in AI model performance. We test our framework on simulated drift data, where we can control the nature of the change, and on high-fidelity synthetic primary care data focused on the UK Covid-19 pandemic. Promising results emerge from our experiments, suggesting that the proposed metrics can effectively monitor potential changes in the performance of AI health products post-deployment, even in the absence of labelled data.
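The abstract does not spell out how the two indicators are computed, but both have common instantiations. A minimal sketch, assuming the discrimination error is obtained from a domain classifier trained to separate training rows from new test rows (error near 0.5 means the batches are indistinguishable, i.e. little covariate drift), and the prediction uncertainty is the mean entropy of the deployed model's predicted class probabilities; the function names and the choice of a random forest as domain classifier are illustrative, not the authors' exact method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def discrimination_error(train_batch, test_batch, seed=0):
    """Cross-validated error of a domain classifier distinguishing
    training rows (label 0) from new test rows (label 1).
    Error near 0.5 -> batches look alike; error near 0 -> likely drift."""
    X = np.vstack([train_batch, test_batch])
    y = np.concatenate([np.zeros(len(train_batch)), np.ones(len(test_batch))])
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    return 1.0 - acc

def mean_prediction_uncertainty(model, batch):
    """Mean Shannon entropy of the deployed model's predicted class
    probabilities on an unlabelled batch; a rising value over successive
    batches can signal drift without needing labels."""
    proba = np.clip(model.predict_proba(batch), 1e-12, 1.0)
    entropy = -(proba * np.log(proba)).sum(axis=1)
    return entropy.mean()
```

Tracking both quantities over successive unlabelled batches, and alerting when either departs from its baseline range, mirrors the label-free monitoring setting the paper targets.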
External IDs: dblp:conf/ida/RotalintiMT24