Keywords: Time series, Generative models, Mode Collapse
Abstract: Generative models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and diffusion models often suffer from mode collapse, failing to reproduce the full diversity of their training data. While this problem has been extensively studied in image generation, it remains largely unaddressed for time series. We introduce a formal definition of mode collapse for time series and propose DMD-GEN, a geometry-aware metric that quantifies its severity. DMD-GEN leverages Dynamic Mode Decomposition (DMD) to extract coherent temporal structures and uses Optimal Transport between DMD eigenvectors to measure discrepancies in the underlying dynamics. By representing the subspaces spanned by the DMD eigenvectors as points on a Grassmann manifold and comparing them via Wasserstein distances computed from principal angles, DMD-GEN enables a principled geometric comparison between real and generated sequences. The metric is efficient: it requires no additional training, supports mini-batch evaluation, and is easily parallelizable. Beyond quantification, DMD-GEN offers interpretability by revealing which dynamical modes are distorted or missing in the generated data. Experiments on synthetic and real-world datasets using TimeGAN, TimeVAE, and DiffusionTS show that DMD-GEN aligns with existing metrics while providing the first principled framework for detecting and interpreting mode collapse in time series.
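For concreteness, below is a minimal, hypothetical Python sketch of a DMD-GEN-style computation (not the authors' implementation): it extracts the leading DMD modes of each series, orthonormalizes them into subspaces on the Grassmann manifold, measures pairwise discrepancy via principal angles, and approximates the optimal-transport matching with a Hungarian assignment. All function names, the mode-truncation rule, and the assignment-based OT proxy are illustrative assumptions.

    import numpy as np
    from scipy.linalg import orth, subspace_angles
    from scipy.optimize import linear_sum_assignment

    def dmd_mode_basis(x, r=3):
        # x: (features, time) array. Estimate the one-step linear propagator A
        # with x_{t+1} ~ A x_t, then keep the r most persistent DMD modes.
        x1, x2 = x[:, :-1], x[:, 1:]
        a = x2 @ np.linalg.pinv(x1)
        eigvals, eigvecs = np.linalg.eig(a)
        order = np.argsort(-np.abs(eigvals))
        modes = eigvecs[:, order[:r]]
        # Orthonormal real basis spanning the selected (possibly complex) modes.
        return orth(np.column_stack([modes.real, modes.imag]))[:, :r]

    def grassmann_distance(u, v):
        # Geodesic-style distance: l2 norm of the principal angles between subspaces.
        return np.linalg.norm(subspace_angles(u, v))

    def dmd_gen_score(real_batch, gen_batch, r=3):
        # Pairwise Grassmann costs, then a one-to-one matching as a balanced-OT proxy.
        real_sub = [dmd_mode_basis(x, r) for x in real_batch]
        gen_sub = [dmd_mode_basis(x, r) for x in gen_batch]
        cost = np.array([[grassmann_distance(u, v) for v in gen_sub] for u in real_sub])
        rows, cols = linear_sum_assignment(cost)
        return cost[rows, cols].mean()

    # Toy usage: random-walk "real" series vs. white-noise "generated" series.
    rng = np.random.default_rng(0)
    real = [np.cumsum(rng.standard_normal((5, 100)), axis=1) for _ in range(8)]
    fake = [rng.standard_normal((5, 100)) for _ in range(8)]
    print(dmd_gen_score(real, fake))

In a faithful implementation, the assignment step would presumably be replaced by an exact or entropic optimal-transport solver and the complex mode information retained, as described in the abstract.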
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 21522