- Abstract: True personalised medicine relies on the ability to create a learned representation, or embedding, of a patient's health data. While efforts have been made to incorporate some of the available data sources, imaging modalities are notably absent from these representations. Extracting useful features across data types ranging from imaging to text is complex: it requires overcoming data sparsity, accurately embedding multi-modal data, and ensuring interpretability, and the design of machine learning models must account for these challenges. This article covers the rationale driving the development of personalised patient embeddings; the current approaches used with healthcare data and in the wider realm of multi-modal deep learning; and their shortfalls and open challenges. Finally, drawing on current best practices, extensions of single-modality learning metrics, and the shifting focus towards using a patient's imaging data to determine relevant clinical factors, a model architecture is proposed and discussed.
- Author Affiliation: Maxwell MRI
- Keywords: embedding, autoencoder, multi-modal data