Detecting cognitive impairments by agreeing on interpretations of linguistic features

Anonymous

Jul 26, 2018 Blind Submission
  • Abstract: Linguistic features have shown promising applications for detecting various cognitive impairments. To improve detection accuracy, increasing the amount of data and increasing the number of linguistic features are two applicable approaches. However, acquiring additional clinical data can be expensive, and hand-crafting features is hard. In this paper, we take a third approach, putting forward the scheme ``diagnosis after reaching consensus'', where non-overlapping subsets (modalities) of linguistic features are compressed into low-dimensional interpretation vectors by neural networks (``ePhysicians''). By encouraging interpretation vectors from multiple modalities to be indistinguishable, the ``ePhysicians'' extract important information for classification. We show that with the same subsets of features, our models outperform baseline neural network classifiers on clinical data. Using all 413 linguistic features, our best models achieve accuracies in detecting cognitive impairments comparable to state-of-the-art models on several balanced datasets (.82 on DementiaBank in detecting Alzheimer's Disease (AD) and .66 in detecting Mild Cognitive Impairment (MCI)).
  • TL;DR: Consensus Networks
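The scheme described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the modality split sizes, interpretation dimension, and network shapes are all assumptions, and only the forward pass is shown (the adversarial "consensus" training against a modality discriminator is described in comments).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the 413 linguistic features are split into three
# non-overlapping modalities (the split sizes here are illustrative
# assumptions, not taken from the paper).
modality_sizes = [200, 113, 100]          # sums to 413
interp_dim = 16                           # low-dimensional interpretation vectors

def relu(x):
    return np.maximum(x, 0.0)

# One "ePhysician" (encoder) per modality: compresses its feature
# subset into an interpretation vector.
encoders = [rng.normal(0, 0.1, size=(d, interp_dim)) for d in modality_sizes]

# Classifier over the concatenated interpretation vectors (2 classes:
# impaired vs. control).
W_clf = rng.normal(0, 0.1, size=(interp_dim * len(modality_sizes), 2))

x = rng.normal(size=413)                  # one subject's feature vector
splits = np.split(x, np.cumsum(modality_sizes)[:-1])
interps = [relu(s @ E) for s, E in zip(splits, encoders)]

# During training, a discriminator would try to guess which modality
# produced each interpretation vector; the encoders are trained to make
# the vectors indistinguishable ("reaching consensus") while the
# classifier predicts the diagnosis from their concatenation.
logits = np.concatenate(interps) @ W_clf
probs = np.exp(logits) / np.exp(logits).sum()
```

The key design choice the abstract highlights is that the consensus pressure (indistinguishability of interpretation vectors across modalities) acts as a regularizer, so the same feature subsets yield better classifiers than plain feed-forward baselines.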
