ConditionGen: Controllable EEG Synthesis via Artifact-Conditioned Diffusion
Keywords: EEG, Time-series diffusion, Conditional generation, Artifacts, Healthcare ML, Data augmentation, Evaluation metrics, Signal processing
TL;DR: A FiLM-conditioned 1D diffusion model generates EEG with controllable artifacts, and a domain-specific evaluation reveals both strengths and current failure modes to guide practical use.
Abstract: Reliable EEG research and deployment are hampered by scarce, messy, and artifact-ridden recordings. We introduce ConditionGen, a diffusion-based generator for controllable EEG synthesis that conditions on clinically relevant factors, including artifact type (none/eye/muscle/chewing/shiver/electrode), artifact intensity, seizure flag, age bin, and montage. Our model uses a 1D UNet with FiLM-style conditioning and samples with classifier-free guidance. To move beyond image-style heuristics, we establish a task-appropriate evaluation suite: (i) fidelity via Welch band-power deltas (δ/θ/α/β), channel-covariance Frobenius distance, and ACF-L2; (ii) specificity via artifact-classifier “recovery” (does class-conditional synthesis look like the intended artifact to independent EEG classifiers?); and (iii) utility via augmentation gains when mixing synthesized segments into artifact-recognition training. Across artifact conditions, we report consistent fidelity rankings and surface clear failure modes in recovery/utility that guide next-step improvements (e.g., stronger conditioning signals, artifact-stratified real pools, schedule/sampler co-design). We release a lightweight, end-to-end scriptable pipeline that samples per-artifact cohorts and writes all metrics/tables in one pass, enabling reproducible comparisons across methods and settings.
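To make the conditioning mechanism concrete, the following is a minimal sketch (not the submission's code) of FiLM-style modulation in a 1D convolutional block and of classifier-free guidance at sampling time. All names here (FiLMBlock, cond_dim, eps_model, guidance_scale) are illustrative assumptions, and the block structure is a generic choice rather than the paper's architecture.

    # Minimal sketch, assuming a PyTorch-style implementation.
    import torch
    import torch.nn as nn

    class FiLMBlock(nn.Module):
        """Conv block whose activations are modulated by a conditioning
        vector: h -> gamma(c) * h + beta(c), applied per channel."""
        def __init__(self, channels: int, cond_dim: int):
            super().__init__()
            self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
            self.norm = nn.GroupNorm(8, channels)  # assumes channels % 8 == 0
            self.film = nn.Linear(cond_dim, 2 * channels)  # -> (gamma, beta)

        def forward(self, h: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
            # h: (B, C, T) EEG feature map; cond: (B, cond_dim) embedding of
            # artifact type/intensity, seizure flag, age bin, and montage.
            gamma, beta = self.film(cond).chunk(2, dim=-1)
            h = self.norm(self.conv(h))
            return torch.relu(gamma.unsqueeze(-1) * h + beta.unsqueeze(-1))

    def cfg_eps(eps_model, x_t, t, cond, null_cond, guidance_scale=2.0):
        """Classifier-free guidance: blend conditional and unconditional
        noise predictions, eps = eps_u + s * (eps_c - eps_u)."""
        eps_c = eps_model(x_t, t, cond)
        eps_u = eps_model(x_t, t, null_cond)  # trained with condition dropout
        return eps_u + guidance_scale * (eps_c - eps_u)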
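The fidelity metrics named in the abstract can be sketched as below; the windowing, band edges, sampling rate, and lag count are our assumptions for illustration, not the paper's settings. Inputs are real and synthetic pools of shape (n_segments, n_channels, n_samples).

    # Sketch of the fidelity suite: Welch band-power deltas, channel-covariance
    # Frobenius distance, and ACF-L2, under the assumed conventions above.
    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power_delta(x_real, x_fake, fs=256.0):
        """Per-band absolute difference in mean Welch power, real vs. synthetic."""
        f, p_r = welch(x_real, fs=fs, nperseg=min(256, x_real.shape[-1]), axis=-1)
        _, p_f = welch(x_fake, fs=fs, nperseg=min(256, x_fake.shape[-1]), axis=-1)
        deltas = {}
        for name, (lo, hi) in BANDS.items():
            idx = (f >= lo) & (f < hi)
            deltas[name] = float(np.abs(p_r[..., idx].mean() - p_f[..., idx].mean()))
        return deltas

    def cov_frobenius(x_real, x_fake):
        """Frobenius distance between mean channel-covariance matrices."""
        mean_cov = lambda x: np.mean([np.cov(seg) for seg in x], axis=0)
        return float(np.linalg.norm(mean_cov(x_real) - mean_cov(x_fake), ord="fro"))

    def acf_l2(x_real, x_fake, n_lags=64):
        """L2 distance between mean autocorrelation functions up to n_lags."""
        def mean_acf(x):
            x = x - x.mean(axis=-1, keepdims=True)
            acf = np.array([[np.correlate(ch, ch, mode="full")[len(ch) - 1:
                                                               len(ch) - 1 + n_lags]
                             / (np.dot(ch, ch) + 1e-12)
                             for ch in seg] for seg in x])
            return acf.mean(axis=(0, 1))
        return float(np.linalg.norm(mean_acf(x_real) - mean_acf(x_fake)))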
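The specificity ("recovery") evaluation reduces to asking an independently trained artifact classifier to label class-conditional samples and measuring agreement with the intended class; the clf.predict interface in this sketch is a hypothetical placeholder, not an API from the submission.

    # Sketch of artifact-classifier "recovery" under the assumptions above.
    import numpy as np

    def recovery_rate(clf, x_synth, intended_labels):
        """Fraction of synthetic segments that an independent artifact
        classifier assigns to the class they were conditioned on."""
        pred = clf.predict(x_synth)  # e.g., a classifier trained on real EEG
        return float(np.mean(pred == np.asarray(intended_labels)))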
Submission Number: 161