VidEEG-Gen: A Dataset and Diffusion Framework for Video-Conditioned Privacy-Preserving EEG Generation

20 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Video stimulus/individual-conditioned EEG generation dataset (VidEEG-Gen), Self-Play Graph Network (SPGN), Graph-Enhanced Diffusion, Denoising Diffusion Probabilistic Model (DDPM)
TL;DR: This paper proposes a novel stimulus-/subject-conditioned EEG generation task, introduces the VidEEG-Gen dataset together with a Self-Play Graph Network (SPGN) generation model, and enables privacy-preserving, scalable EEG synthesis for emotion analysis and brain-computer interfaces.
Abstract: Recent advances in multimodal learning have transformed text, video, and audio processing, yet Electroencephalography (EEG) research lags behind: data are scarce because acquisition requires specialized equipment, and sharing personal neural signals raises privacy risks. These limitations are compounded by the shortcomings of prior generative models, whose synthetic signals often lack spatiotemporal coherence, biological plausibility, and stimulus-response alignment; together, these factors restrict access to diverse, high-quality data and hinder EEG-based applications such as emotion analysis and brain-computer interfaces. Moreover, the absence of a dedicated task for modeling the mapping from naturalistic video stimuli to personalized EEG responses has impeded progress in privacy-preserving EEG synthesis. To advance the field, we propose the task of stimulus-/subject-conditioned EEG generation under naturalistic stimulation, which enables low-cost, scalable data generation while addressing ethical concerns. To support this task, we introduce the Video stimulus/individual-conditioned EEG generation dataset (VidEEG-Gen), a unified dataset and generation framework for video-conditioned, privacy-preserving EEG synthesis. VidEEG-Gen comprises 1,007 aligned video-EEG generation samples that synchronize natural video stimuli with synthetic EEG dynamics. At its core, VidEEG-Gen employs a Self-Play Graph Network (SPGN), a graph-enhanced diffusion model designed to capture spatiotemporal EEG patterns conditioned on visual input. This integrated approach provides a foundation for emotion analysis, data augmentation, and brain-computer interfaces. We further establish a dedicated evaluation system for assessing EEG generation quality in dynamic visual perception tasks. In benchmark visual-stimulus experiments, the SPGN model within VidEEG-Gen achieves a signal stability index of 0.9363 and a comprehensive performance index of 0.9373. The source code and dataset will be made publicly available upon acceptance.
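To make the abstract's architecture concrete, below is a minimal sketch of a graph-enhanced DDPM denoiser conditioned on video features, in the spirit of the SPGN described above. This is not the authors' implementation: the module names (SPGNDenoiser, GraphConv), the electrode count, the video feature dimension, and the noise schedule are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): a graph-enhanced DDPM
# denoiser for video-conditioned EEG generation. EEG channels are treated
# as nodes of an electrode graph; a video feature vector conditions the
# noise prediction at every diffusion timestep.
import torch
import torch.nn as nn

N_CHANNELS = 32    # assumed number of EEG electrodes (graph nodes)
T_STEPS = 1000     # diffusion timesteps (standard DDPM choice)
HIDDEN = 64

class GraphConv(nn.Module):
    """One message-passing layer over the electrode graph: H' = relu(A_hat H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # h: (batch, nodes, in_dim); adj: (nodes, nodes), row-normalized
        return torch.relu(self.lin(adj @ h))

class SPGNDenoiser(nn.Module):
    """Predicts the noise eps added to EEG x_t, conditioned on the timestep
    and a video feature vector (e.g. from a frozen video encoder)."""
    def __init__(self, seq_len, video_dim=512):
        super().__init__()
        self.in_proj = nn.Linear(seq_len, HIDDEN)
        self.t_embed = nn.Embedding(T_STEPS, HIDDEN)
        self.v_proj = nn.Linear(video_dim, HIDDEN)
        self.gc1 = GraphConv(HIDDEN, HIDDEN)
        self.gc2 = GraphConv(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, seq_len)

    def forward(self, x_t, t, video_feat, adj):
        # x_t: (B, nodes, seq_len) noisy EEG; t: (B,) integer timesteps
        h = self.in_proj(x_t)
        h = h + self.t_embed(t)[:, None, :] + self.v_proj(video_feat)[:, None, :]
        h = self.gc2(self.gc1(h, adj), adj)
        return self.out(h)  # predicted noise, same shape as x_t

# Standard DDPM forward-noising and epsilon-prediction loss (Ho et al., 2020).
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0, video_feat, adj):
    B = x0.shape[0]
    t = torch.randint(0, T_STEPS, (B,))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(B, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps   # q(x_t | x_0)
    return nn.functional.mse_loss(model(x_t, t, video_feat, adj), eps)
```

A training step under these assumptions would draw a batch of clean EEG segments x0, precomputed video features, and a fixed row-normalized adjacency matrix over electrodes, then backpropagate ddpm_loss; sampling would run the usual DDPM reverse process with the video features held fixed.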
Supplementary Material: zip
Primary Area: generative models
Submission Number: 22779