Keywords: EEG, neonatal seizure detection, deep learning, modelling individual experts
TL;DR: Deep learning models that incorporate variance in expert annotations during training outperform those trained on consensus annotations in automated neonatal seizure detection experiments.
Abstract: Developing algorithms to detect seizures in neonatal electroencephalogram (EEG) signals is an important area of research. Identifying neonatal seizures is a time-consuming process that requires specially trained experts. Most neonatal seizure detection algorithms use supervised learning and require large datasets of labelled EEG for training. However, EEG is a complex physiological signal, and expert annotators often disagree when identifying seizures in infants. Most studies with multiple expert annotators compress the annotations down to one ‘ground truth’ set of labels during algorithm training; this may lead to a loss of valuable information. This study investigates whether preserving the disagreement of multiple expert annotators during training improves model performance. Three variations of a deep learning architecture are compared experimentally; each varies in how annotator disagreements are accounted for. The results indicate that there is value in modelling expert annotations separately in supervised learning algorithms. This study proposes architectures that harness expert variability by learning from both the agreement and disagreement in an open-source dataset of neonatal EEGs.
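The abstract does not specify the architectures, but the core idea of preserving annotator disagreement can be sketched as one output head per expert trained on that expert's own labels, rather than a single head trained on a consensus label. The following is a minimal NumPy sketch under assumed toy data (simulated EEG feature windows and three synthetic annotators); the feature dimensions, noise model, and training settings are all illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # Binary cross-entropy between predicted probabilities and labels.
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy stand-in for EEG windows: 200 samples, 8 features, 3 annotators.
n, d, n_annotators = 200, 8, 3
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
logits = X @ true_w

# Each simulated annotator labels from the same underlying signal plus
# their own noise, so their annotations partially disagree.
Y = np.stack(
    [((logits + rng.normal(scale=1.0, size=n)) > 0).astype(float)
     for _ in range(n_annotators)],
    axis=1,
)  # shape (n, n_annotators)

# One logistic "head" per annotator; the loss is the mean BCE across
# heads, so disagreement is preserved rather than collapsed pre-training.
W = np.zeros((d, n_annotators))
lr = 0.5
for _ in range(300):
    P = sigmoid(X @ W)             # (n, 3): one prediction per annotator
    grad = X.T @ (P - Y) / n       # gradient of mean BCE, head by head
    W -= lr * grad

P = sigmoid(X @ W)
loss = np.mean([bce(P[:, a], Y[:, a]) for a in range(n_annotators)])

# At inference, per-expert predictions can be aggregated (here: averaged),
# and evaluated against a majority-vote consensus.
consensus = (Y.mean(axis=1) > 0.5).astype(float)
pred = (P.mean(axis=1) > 0.5).astype(float)
acc = np.mean(pred == consensus)
```

The contrast with the consensus baseline is that majority voting here happens only at evaluation time; during training, each head sees its annotator's raw labels, including the cases where experts disagree.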
Track: 4. AI-based clinical decision support systems
Supplementary Material: pdf
Registration Id: 9NNF8P9H2N2
Submission Number: 39