Attribute Based Interpretable Evaluation Metrics for Generative Models

21 Apr 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: evaluation metrics for generative models
Abstract:

While generative models continue to evolve, the field of evaluation metrics has largely remained stagnant. Despite the annual publication of new metric papers, most of these metrics share a common characteristic: they measure distributional distance using pre-trained embeddings without considering the interpretability of the underlying information. This limits their usefulness and makes it difficult to gain a comprehensive understanding of the data. To address this issue, we propose a new type of interpretable embedding. We demonstrate how deeply encoded embeddings can be transformed into interpretable embeddings by measuring their correspondence with text attributes. With this new type of embedding, we introduce two novel metrics that measure and explain the diversity of a generator: the first compares how frequently each attribute appears in the training set versus the generated set, and the second evaluates whether the relationships between attributes in the training set are preserved. With these new metrics, we hope to enhance the interpretability and usefulness of evaluation metrics in the field of generative models.
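To make the idea concrete, here is a minimal NumPy sketch of the first metric, not the authors' implementation: it assumes you already have image embeddings and attribute text embeddings from a joint image–text model (e.g., a CLIP-style encoder), and the 0.5 similarity threshold and absolute-difference gap are illustrative choices.

```python
import numpy as np

def attribute_scores(image_embs, attr_text_embs):
    """Cosine similarity between each image embedding and each attribute
    text embedding; shape (n_images, n_attributes)."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = attr_text_embs / np.linalg.norm(attr_text_embs, axis=1, keepdims=True)
    return img @ txt.T

def attribute_frequencies(scores, threshold=0.5):
    """Fraction of images exhibiting each attribute, where an attribute
    'appears' if its similarity score exceeds the (assumed) threshold."""
    return (scores > threshold).mean(axis=0)

def frequency_gap(train_freqs, gen_freqs):
    """Per-attribute absolute difference in appearance frequency between
    the training set and the generated set; large gaps flag attributes
    the generator over- or under-produces."""
    return np.abs(train_freqs - gen_freqs)

# Toy example with 2D embeddings and two attributes.
attrs = np.array([[1.0, 0.0], [0.0, 1.0]])
train = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
gen   = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

train_f = attribute_frequencies(attribute_scores(train, attrs))  # [0.5, 0.5]
gen_f   = attribute_frequencies(attribute_scores(gen, attrs))    # [0.75, 0.25]
gap     = frequency_gap(train_f, gen_f)                          # [0.25, 0.25]
```

Because each entry of `gap` is tied to a named attribute, the metric directly explains *which* aspects of diversity the generator loses, rather than reporting a single opaque distance.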

Supplementary Material: zip
Submission Number: 320