Multi-layer Representation Learning for Medical Concepts
Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Elizabeth Searles, Catherine Coffey
Feb 17, 2016 (modified: Feb 17, 2016) · ICLR 2016 workshop submission · Readers: everyone
Abstract: Learning efficient representations for concepts has proven to be an important basis for many applications such as machine translation or document classification.
Proper representations of medical concepts such as diagnosis, medication, and procedure codes, as well as visits, will have broad applications in healthcare analytics.
However, in Electronic Health Records (EHRs), a patient's visit sequence includes multiple concepts (diagnosis, procedure, and medication codes) per visit.
This structure provides two types of relational information, namely sequential order of visits and co-occurrence of the codes within each visit.
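The second type of relational information, co-occurrence of codes within a visit, can be illustrated with a minimal skip-gram-with-negative-sampling sketch (this is a generic illustration of the co-occurrence signal, not the authors' Med2Vec architecture; the visit data and code names below are hypothetical):

```python
import math
import random

# Hypothetical toy EHR: each visit is an unordered set of medical codes
# (code names are illustrative, not from any real coding system).
visits = [
    ["diab", "insulin", "a1c_test"],
    ["diab", "insulin"],
    ["flu", "oseltamivir"],
    ["flu", "oseltamivir", "fever"],
    ["diab", "a1c_test"],
]

codes = sorted({c for v in visits for c in v})
rng = random.Random(0)
DIM = 8
# Target and context embedding tables, initialized small and random.
emb = {c: [rng.uniform(-0.1, 0.1) for _ in range(DIM)] for c in codes}
ctx = {c: [rng.uniform(-0.1, 0.1) for _ in range(DIM)] for c in codes}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sgd_step(w, c, label, lr=0.1):
    # One logistic-loss update on the pair (w, c) toward target `label` (1 or 0).
    g = sigmoid(dot(w, c)) - label
    for k in range(DIM):
        wk = w[k]
        w[k] -= lr * g * c[k]
        c[k] -= lr * g * wk

for _ in range(300):
    for visit in visits:
        for a in visit:
            for b in visit:
                if a == b:
                    continue
                sgd_step(emb[a], ctx[b], 1)        # co-occurring pair: pull together
                neg = rng.choice(codes)
                sgd_step(emb[a], ctx[neg], 0)      # random negative: push apart

def cosine(a, b):
    u, v = emb[a], emb[b]
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

# Codes that share visits should land closer than codes that never co-occur.
print(cosine("diab", "insulin") > cosine("diab", "oseltamivir"))
```

Here each code within a visit is treated as context for every other code in that visit, analogous to a word-window in Skip-gram; a sequence-aware model would additionally draw context pairs across neighboring visits.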
In this work, we propose Med2Vec, which not only learns distributed representations for both medical codes and visits from a large EHR dataset with over 3 million visits, but also allows us to interpret the learned representations, whose clinical meaningfulness was positively confirmed by clinical experts.
In the experiments, Med2Vec shows significant improvements in key medical applications over popular baselines such as Skip-gram, GloVe, and stacked autoencoders, while providing clinically meaningful interpretations.