A Modulation Layer to Increase Neural Network Robustness Against Data Quality Issues

Published: 06 Apr 2023, Last Modified: 06 Apr 2023. Accepted by TMLR.
Abstract: Data missingness and quality are common problems in machine learning, especially for high-stakes applications such as healthcare. Developers often train machine learning models on carefully curated datasets using only high-quality data; however, this reduces the utility of such models in production environments. We propose a novel neural network modification to mitigate the impacts of low-quality and missing data, which replaces the fixed weights of a fully-connected layer with a function of an additional input. This is inspired by neuromodulation in biological neural networks, where the cortex can up- and down-regulate inputs based on their reliability and the presence of other data. In testing, with reliability scores as a modulating signal, models with modulating layers were more robust against data quality degradation, including additional missingness. These models are superior to imputation: they save training time by skipping the imputation process entirely, and they further allow the introduction of other data quality measures that imputation cannot handle. Our results suggest that explicitly accounting for reduced information quality with a modulating fully-connected layer can enable the deployment of artificial intelligence systems in real-time applications.
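The core idea (weights of a fully-connected layer computed as a function of a modulating signal such as per-feature reliability scores) can be illustrated with a minimal sketch. This is an assumption-laden illustration using a simple multiplicative sigmoid gate; the actual formulation in the paper's `mfcl` code may differ, and the names `modulated_fc`, `W`, `V`, and `r` are hypothetical.

```python
import numpy as np

def modulated_fc(x, r, W, V, b):
    """Sketch of a modulated fully-connected layer.

    Instead of fixed weights, the effective weights are a function of a
    modulating signal r (e.g., per-feature reliability scores in [0, 1]).

    x: input vector, shape (in,)
    r: modulating signal, shape (in,)
    W: base weight matrix, shape (out, in)
    V: modulation weight matrix, shape (out, in) -- hypothetical
    b: bias vector, shape (out,)

    Hypothetical scheme: effective_W = W * sigmoid(V * r), so a low
    reliability score can down-regulate the contribution of a feature.
    """
    gate = 1.0 / (1.0 + np.exp(-(V * r)))  # r broadcasts over rows of V
    return (W * gate) @ x + b

# Usage: with r = 0 everywhere, the gate is sigmoid(0) = 0.5,
# so the layer behaves like a fixed FC layer with weights 0.5 * W.
W = np.ones((2, 3))
V = np.ones((2, 3))
x = np.array([1.0, 1.0, 1.0])
r = np.zeros(3)
b = np.zeros(2)
y = modulated_fc(x, r, W, V, b)  # -> [1.5, 1.5]
```

During training, both `W` and `V` would be learned, so the network itself discovers how strongly to trust each input given its reliability score, rather than relying on values filled in by a separate imputation step.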
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera Ready Version
Video: https://youtu.be/SI-5cuPJV9U
Code: https://github.com/mabdelhack/mfcl
Assigned Action Editor: ~Alessandro_Sperduti1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 745