- Abstract: Today, more than half a billion people live under the threat of devastating earthquakes. The 21st century has already seen earthquakes kill more than 800,000 people and cause more than $300 billion in damage. Despite these impacts and decades' worth of research into the physics of seismic events, existing earthquake predictions are often too inaccurate to be useful for issuing actionable warnings. Deep learning may help close this gap, but as researchers we must proceed with care. First, the supply of historical data is limited, so overfitting is a key concern. Second, models' predictions and parameters should be interpretable, so that they can be used to generate and validate hypotheses about the underlying physical process. In response, we provide a case study of applying deep learning to forecasting seismic events in Southern California. We replace small components of a popular Hawkes process model for earthquake forecasting with black-box neural networks, with the goal of maintaining a level of interpretability similar to that of the original model. Using experiments on about three decades of earthquake hypocenter and magnitude estimates, we visualize our learned representations of earthquake events and discuss interpretability-accuracy tradeoffs. Our visualizations may suggest refinements to the Utsu/Omori law for the time decay of aftershock productivity (Utsu, 1971).
- Keywords: Hawkes process, deep learning, earthquakes
- TL;DR: interpretable deep learning for earthquake prediction
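To make the modeling setup in the abstract concrete, the following is a minimal sketch of a Hawkes-process conditional intensity for aftershocks, in which the Utsu/Omori time-decay kernel (Utsu, 1971) is the kind of small component one might replace with a neural network. The parameter values, the ETAS-style magnitude-productivity term, and the tiny event catalog are all illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

def omori_kernel(dt, K=0.1, c=0.01, p=1.1):
    """Utsu/Omori decay of aftershock rate; dt is elapsed time in days.
    Parameter values here are illustrative placeholders."""
    return K / (dt + c) ** p

def magnitude_productivity(m, alpha=1.0, m0=3.0):
    """ETAS-style assumption: larger events trigger exponentially
    more aftershocks (alpha, m0 are hypothetical values)."""
    return np.exp(alpha * (m - m0))

def intensity(t, event_times, event_mags, mu=0.2, kernel=omori_kernel):
    """Conditional intensity lambda(t): a constant background rate mu
    plus the magnitude-weighted, time-decayed contribution of every
    past event. Swapping `kernel` for a learned function is the kind
    of substitution the abstract describes."""
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(magnitude_productivity(event_mags[past]) * kernel(dt))

# Illustrative mini-catalog: event times in days and magnitudes.
times = np.array([0.0, 1.5, 3.0])
mags = np.array([5.0, 3.5, 4.2])
lam = intensity(4.0, times, mags)  # rate one day after the m4.2 event
```

Because the kernel enters only through the sum over past events, replacing `omori_kernel` with a small network keeps the rest of the model, and hence much of its interpretability, intact.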