DRMLP: Dynamic Regularized Multi-Layer Perceptron for Neural Granger Causality Discovery with Adaptive Temporal Penalties
Keywords: time series, causal discovery, deep learning, regularization, mlp
TL;DR: A dynamic regularization approach for Granger-based causal discovery outperforms state-of-the-art baselines on simulated and real-world time series data.
Abstract: With the rapid development of IoT devices, collecting multivariate time series data has become increasingly convenient. Understanding the causal relationships among different time series variables is critical for analyzing and controlling the systems that generate them. However, existing neural-network-based Granger causality approaches typically model each time series variable separately and assume that the influence of historical values decays over time. These limitations lead to complex models and poor performance at discovering causality in time series with long-range dependencies. To address these drawbacks, this paper proposes DRMLP (Dynamic Regularized Multi-Layer Perceptron), a Granger causality discovery method that captures periodic temporal dependencies from the input weights of a convolutional network. The approach employs a dual-branch neural network architecture: a linear causal discovery network extracts causal relations from sampled weight data, while a hierarchical regularization strategy optimizes the weights of the convolutional network. This design improves the accuracy of causal relation discovery, reduces noise interference, and ensures the temporal consistency of the identified causal structures. Experiments on simulated datasets and datasets generated by real-world systems show that DRMLP outperforms state-of-the-art baseline methods.
Primary Area: causal reasoning
Submission Number: 2610