Measuring hallucination in disentangled representations

Published: 01 Jan 2024 · Last Modified: 15 Apr 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Disentanglement is a key challenge in representation learning, as it may enable several downstream tasks, including editing operations at a high semantic level and privacy-preserving applications. While much effort has been devoted to designing disentanglement methods and to evaluating their disentanglement performance, no study has highlighted, or proposed to measure, the hallucination that may occur in such disentangled representation spaces. This study focuses on characterizing, measuring, and investigating hallucination in representation spaces learnt by state-of-the-art disentanglement methods.
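As a rough intuition for the kind of quantity such a study might examine (this is not the paper's metric, just a hedged toy sketch): one could edit a single latent dimension of a disentangled autoencoder and measure how much the reconstruction changes outside the features that dimension is supposed to control. The example below uses a toy linear encoder/decoder on synthetic data; all names (`encode`, `decode`, `off_target_change`) are hypothetical.

```python
# Hypothetical illustration (not the paper's method): quantify "hallucination"
# as the off-target change produced when a single latent dimension is edited.
# A toy linear autoencoder with an orthogonal decoder stands in for a trained model.
import numpy as np

rng = np.random.default_rng(0)

d_latent, d_obs = 4, 16
W = np.linalg.qr(rng.normal(size=(d_obs, d_latent)))[0]  # toy decoder weights (orthonormal columns)
encode = lambda x: x @ W          # (n, d_obs) -> (n, d_latent)
decode = lambda z: z @ W.T        # (n, d_latent) -> (n, d_obs)

def off_target_change(x, dim, delta=1.0):
    """Edit latent `dim` by `delta` and average the reconstruction change on the
    observed variables least associated with that dimension (a crude proxy for
    content "hallucinated" by the edit)."""
    z = encode(x)
    z_edit = z.copy()
    z_edit[:, dim] += delta
    diff = np.abs(decode(z_edit) - decode(z))                  # per-feature change
    on_target = np.argsort(-np.abs(W[:, dim]))[: d_obs // d_latent]
    mask = np.ones(d_obs, dtype=bool)
    mask[on_target] = False                                    # keep only off-target features
    return diff[:, mask].mean()

x = rng.normal(size=(8, d_obs))
print(f"off-target change when editing latent 0: {off_target_change(x, dim=0):.4f}")
```

A well-disentangled, non-hallucinating model would keep this off-target change small; how to characterize and measure such effects in state-of-the-art models is the subject of the paper.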