Is the Temporal Attention Bottleneck Informative? Interpretability through Disentangled Generative Representations for Energy Time Series Disaggregation

Published: 23 Jun 2023, Last Modified: 18 Dec 2023
Venue: DeployableGenerativeAI
Keywords: Generative Model, Disentanglement, Learning interpretable representations, Information Theory, Time Series, Energy Disaggregation
TL;DR: Temporal Attention Bottleneck for Variational Auto-encoder (ICML 2023)
Abstract: Generative models have garnered significant attention for their ability to address source separation in disaggregation tasks. This approach holds promise for promoting energy conservation by enabling homeowners to obtain detailed information about their energy consumption from the aggregated load curve alone. Nevertheless, the model's ability to generalize and its interpretability remain two major challenges. To tackle these challenges, we deploy TAB-VAE (Temporal Attention Bottleneck for Variational Auto-encoder), a generative model built on a hierarchical architecture that addresses signature variability and provides robust, interpretable separation through the design of an informative latent-space representation. Our implementation and evaluation guidelines are available at https://github.com/oublalkhalid/TAB-VAE.
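To make the idea in the abstract concrete, below is a minimal PyTorch sketch of a temporal-attention-bottleneck VAE for disaggregation: the aggregate load curve is encoded, an attention layer pools over time steps to form the latent bottleneck, and the decoder emits one channel per appliance. This is an illustrative assumption of the architecture, not the authors' code; it collapses the hierarchical latent structure into a single level, and all names (`TABVAESketch`, `elbo_loss`) and layer sizes are hypothetical. See the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class TABVAESketch(nn.Module):
    """Hypothetical single-level sketch of a temporal-attention-bottleneck VAE.
    Encodes the aggregate load, attends over time to form the latent
    bottleneck, and decodes one signal per appliance."""

    def __init__(self, seq_len=256, hidden=64, latent=16, n_appliances=4):
        super().__init__()
        self.encoder = nn.GRU(1, hidden, batch_first=True)
        # Temporal attention: one score per encoded time step.
        self.attn = nn.Linear(hidden, 1)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, seq_len * n_appliances),
        )
        self.seq_len, self.n_appliances = seq_len, n_appliances

    def forward(self, x):
        # x: (batch, seq_len, 1) aggregate load curve
        h, _ = self.encoder(x)                  # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                # attention-pooled bottleneck
        mu, logvar = self.to_mu(ctx), self.to_logvar(ctx)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(z).view(-1, self.seq_len, self.n_appliances)
        return recon, mu, logvar

def elbo_loss(target, recon, mu, logvar, beta=1.0):
    # Per-appliance reconstruction error plus KL to a standard normal prior.
    rec = nn.functional.mse_loss(recon, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

# Usage on random data, for shape-checking only:
model = TABVAESketch()
x = torch.randn(8, 256, 1)            # batch of aggregate load windows
recon, mu, logvar = model(x)          # recon: (8, 256, 4)
```

Attention pooling over time is what makes the bottleneck inspectable: the weights `w` indicate which time steps drive each latent code, which is one route to the interpretability the abstract claims.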
Submission Number: 65