Abstract: A major barrier to deploying current smart models is their unreliability in dynamic environments. Prediction models, although proficient at delivering accurate predictions on the fixed training data, cannot guarantee robust performance in all novel environments. Consequently, conventional systems often rely on human experts to decide when the system should handle tasks autonomously and when the human expert should provide an opinion. We propose AdaShifter (Adaptive Online Shifter), a novel design based on an online incremental learning process, in which the algorithm acts as an intermediary layer between prediction models and downstream human experts and requests their annotations only when doing so is likely to be beneficial. The results of a large-scale experiment show that our algorithm requests human experts when they are needed and significantly improves annotation compared to fixed, non-interactive requesting approaches.
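The abstract does not specify AdaShifter's update rule, so the following is only a minimal sketch of one way an online incremental gating layer could decide when to request expert annotations; the `OnlineShifter` class, its input features (model confidence, drift score), learning rate, and threshold are all illustrative assumptions and not the paper's method.

```python
import numpy as np

class OnlineShifter:
    """Hypothetical sketch of an online gating layer that sits between a
    prediction model and human experts and decides, per example, whether
    to request an expert annotation. This is NOT the paper's AdaShifter
    algorithm; the features, update rule, and thresholds are assumptions."""

    def __init__(self, n_features: int, lr: float = 0.1, threshold: float = 0.5):
        self.w = np.zeros(n_features)   # weights of a logistic gating model
        self.b = 0.0
        self.lr = lr                    # learning rate (assumed)
        self.threshold = threshold      # request an expert if P(benefit) > threshold

    def _score(self, x: np.ndarray) -> float:
        # Estimated probability that an expert annotation would be beneficial.
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def should_request(self, x: np.ndarray) -> bool:
        # Gating decision made online, before any expert label is seen.
        return self._score(x) > self.threshold

    def update(self, x: np.ndarray, benefited: int) -> None:
        # Incremental (single-example) logistic-regression update once
        # feedback arrives: benefited = 1 if the expert's annotation
        # improved the outcome, else 0.
        p = self._score(x)
        grad = p - benefited
        self.w -= self.lr * grad * x
        self.b -= self.lr * grad


# Usage sketch: features could be, e.g., the prediction model's uncertainty
# and a drift statistic for the current environment (both assumed here).
shifter = OnlineShifter(n_features=2)
for model_confidence, drift_score in [(0.95, 0.1), (0.40, 0.8), (0.55, 0.6)]:
    x = np.array([1.0 - model_confidence, drift_score])
    if shifter.should_request(x):
        benefited = 1  # placeholder: observed outcome of the expert's annotation
        shifter.update(x, benefited)
```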