GRAD-T: Graph Regularized Attention-based Diffusion Model for Analysis of Contextual Emotion Contagion

TMLR Paper 4656 Authors

12 Apr 2025 (modified: 25 Apr 2025) · Under review for TMLR · CC BY 4.0
Abstract: We propose a Graph Regularized Attention-Based Diffusion Transformer (GRAD-T) model that uses kernel temporal attention and a regularized sparse graph method to model general diffusion processes over networks. The model exploits the spatiotemporal nature of data generated by diffusion processes over networks to examine phenomena that vary across locations and time, such as disease outbreaks, climate patterns, ecological changes, news and information flows, transportation flows, and sentiment contagion over social networks. The kernel attention captures the temporal dependence of diffusion processes within locations, while the regularized spatial attention mechanism accounts for diffusion between locations. The proposed regularization, which combines penalized matrix estimation with a resampling approach, supports modeling of high-dimensional data from large graphical networks and identifies the dominant diffusion pathways. We use the model to predict how emotions spread across sparse networks, applying it to a unique dataset of COVID-19 tweets that we curated, spanning March to December 2020 across various U.S. locations; we will release this dataset publicly. We use model parameters (attention measures) to construct indices that compare emotion diffusion potential within and between nodes. Our findings show that negative emotions such as fear, anger, and disgust exhibit substantial potential for temporal and spatial diffusion, and that different types of emotions exhibit different patterns of temporal and spatial diffusion. We show that our model improves the prediction accuracy of emotion diffusion over social media networks compared with standard models such as LSTMs and CNNs. Our key contribution is a regularized graph transformer that uses a penalty and a resampling approach to enhance the robustness, interpretability, and scalability of sparse graph learning.
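To make the two attention mechanisms described in the abstract concrete, here is a minimal, purely illustrative PyTorch sketch of (a) kernel attention over time steps within a node and (b) spatial attention across nodes with an L1 sparsity penalty on the attention matrix. The layer names, the RBF-kernel choice, the L1 penalty, and all hyperparameters are assumptions for illustration only; they are not the authors' actual implementation of GRAD-T.

```python
import torch
import torch.nn as nn


class KernelTemporalAttention(nn.Module):
    """Illustrative temporal attention within a node: attention weights come
    from an RBF kernel over query/key distances rather than a scaled dot
    product (an assumed stand-in for the paper's kernel temporal attention)."""

    def __init__(self, d_model, bandwidth=1.0):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.bandwidth = bandwidth

    def forward(self, x):
        # x: (batch, time, d_model) -- one node's time series
        q, k, v = self.q(x), self.k(x), self.v(x)
        dist_sq = torch.cdist(q, k) ** 2                       # (batch, T, T)
        attn = torch.softmax(-dist_sq / (2 * self.bandwidth ** 2), dim=-1)
        return attn @ v


class SparseSpatialAttention(nn.Module):
    """Illustrative spatial attention across nodes with an L1 penalty on the
    attention matrix to encourage a sparse diffusion graph (one plausible
    reading of the abstract's penalized matrix estimation)."""

    def __init__(self, d_model, l1_weight=1e-3):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.l1_weight = l1_weight

    def forward(self, h):
        # h: (batch, nodes, d_model) -- node embeddings at one time step
        q, k, v = self.q(h), self.k(h), self.v(h)
        scores = q @ k.transpose(-2, -1) / h.size(-1) ** 0.5   # (batch, N, N)
        attn = torch.softmax(scores, dim=-1)
        penalty = self.l1_weight * attn.abs().sum()            # sparsity term added to the loss
        return attn @ v, attn, penalty


if __name__ == "__main__":
    batch, nodes, steps, d = 2, 5, 10, 16
    x = torch.randn(batch * nodes, steps, d)       # per-node time series
    h = torch.randn(batch, nodes, d)               # node embeddings at one step
    print(KernelTemporalAttention(d)(x).shape)     # (batch*nodes, steps, d)
    out, adj, penalty = SparseSpatialAttention(d)(h)
    print(out.shape, adj.shape, penalty.item())    # (2, 5, 16), (2, 5, 5), scalar
```

In this sketch the learned spatial attention matrix `adj` plays the role of the estimated diffusion graph, and the scalar `penalty` would be added to the training loss; the resampling component mentioned in the abstract is not shown.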
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Added references, corrected punctuation and grammar.
Assigned Action Editor: ~Manuel_Gomez_Rodriguez1
Submission Number: 4656