Abstract: Being able to accurately predict the time to an event of interest, commonly known as survival analysis, is extremely beneficial in many real-world applications. Traditional statistical survival analysis methods, e.g., the Cox proportional hazards model and parametric censored regression, rest on strong and sometimes impractical assumptions and can only capture linear relationships between features and the target. Recently, deep learning based formulations have been proposed for survival analysis to handle non-linearity. However, these existing deep learning methods either inherit strong assumptions from their corresponding base models or are tailored to discrete-time survival analysis. To overcome these limitations, we propose an objective function to guide the training of a deep learning model for continuous-time survival analysis. The objective function combines a ranking based loss and a point-wise regression based loss. The ranking based loss measures the goodness of the ordering of the predicted survival times across all instances. The point-wise regression based loss measures the difference between the predicted survival time and the true survival time under right-censored time-to-event data. More specifically, we derive two versions of the ranking based loss from the smoothed concordance index, and two versions of the point-wise loss based on the normalized mean squared error (MSE) and mean absolute error (MAE). The proposed formulation is thus capable of handling continuous-time survival analysis from both global and local perspectives. We conduct experiments on several large-scale real-world time-to-event datasets, and the results demonstrate that our model outperforms state-of-the-art survival analysis methods. The code and data used in the experiments are available at https://github.com/yanlirock/local_global_survival.
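To make the shape of the combined objective concrete, below is a minimal PyTorch sketch of one plausible instantiation. The sigmoid smoothing of the concordance index, the hinge-style treatment of censored instances, and the trade-off weight `alpha` are illustrative assumptions for exposition only; the paper itself derives two ranking variants and normalized MSE/MAE point-wise variants.

```python
import torch

def smoothed_cindex_loss(pred, time, event, sigma=0.1):
    # Global ranking term: a sigmoid-smoothed surrogate for the concordance
    # index. A pair (i, j) is comparable when instance i experienced the
    # event (event[i] == 1) before instance j was observed (time[i] < time[j]);
    # the loss encourages pred[i] < pred[j] on such pairs.
    # (The sigmoid smoothing with bandwidth `sigma` is a hypothetical choice.)
    comparable = (time.unsqueeze(1) < time.unsqueeze(0)) & event.bool().unsqueeze(1)
    diff = pred.unsqueeze(1) - pred.unsqueeze(0)  # diff[i, j] = pred_i - pred_j
    # Large when the earlier-event instance is wrongly predicted to survive longer.
    pair_loss = torch.sigmoid(diff / sigma)
    return pair_loss[comparable].mean()

def censored_mse_loss(pred, time, event):
    # Local point-wise term for right-censored data: exact squared error for
    # uncensored instances; for censored instances, penalize only predictions
    # that fall below the censoring time (a hinge-style assumption, since the
    # true event time is known only to exceed the censoring time).
    uncensored = event.bool()
    loss_u = (pred[uncensored] - time[uncensored]) ** 2
    loss_c = torch.clamp(time[~uncensored] - pred[~uncensored], min=0.0) ** 2
    return torch.cat([loss_u, loss_c]).mean()

def combined_loss(pred, time, event, alpha=0.5):
    # Weighted sum of the global (ranking) and local (point-wise) terms;
    # `alpha` is a hypothetical trade-off hyperparameter.
    return alpha * smoothed_cindex_loss(pred, time, event) \
        + (1.0 - alpha) * censored_mse_loss(pred, time, event)
```

In this sketch, `pred` would be the scalar predicted survival time produced by any network, `time` the observed event or censoring times, and `event` a 0/1 indicator of whether the event was observed.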