Learning a metacognition for object perception

Published: 03 Nov 2020, Last Modified: 22 Dec 2024
Venue: SVRHM@NeurIPS Poster
Keywords: metacognition, generative models, object detection, cognitive science, unsupervised learning
TL;DR: We develop a model for the unsupervised learning of a metacognition for a perceptual system and show that this metacognition can be used to improve the system's accuracy.
Abstract: Beyond representing the external world, humans also represent their own cognitive processes. In the context of perception, this metacognition helps us identify unreliable percepts, such as when we recognize that we are experiencing an illusion. In this paper we propose MetaGen, a framework for the unsupervised learning of metacognition. In MetaGen, metacognition is expressed as a generative model of how a perceptual system transforms raw sensory data into noisy percepts. Using basic principles of how the world works (such as object permanence, part of infants’ core knowledge), MetaGen jointly infers the objects in the world causing the percepts and a representation of its own perceptual system. MetaGen can then use this metacognition to infer which objects are actually present in the world, thereby flagging missed or hallucinated objects. On a synthetic dataset of world states and black-box visual systems, we find that MetaGen can quickly learn a metacognition and improve the system’s overall accuracy, outperforming baseline models that lack a metacognition.
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/learning-a-metacognition-for-object/code)
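
The abstract describes MetaGen as jointly inferring the objects in the world and a model of the perceptual system's errors from noisy percepts, using object permanence to tie frames together. As a conceptual illustration only (not the authors' implementation, which is linked above), the sketch below shows this joint-inference idea on a toy problem: a persistent set of objects, a black-box detector characterized by hypothetical per-category miss and hallucination rates, and exhaustive Bayesian inference over both the world state and the error rates. All names and parameters here are illustrative assumptions.

```python
# Minimal sketch (not the authors' method): jointly infer a persistent world state
# and the error rates of a black-box detector from its noisy per-frame detections.
# Assumptions (hypothetical): categories are detected independently, with a single
# miss rate and hallucination rate shared across categories; the true set of objects
# is constant across frames (object permanence).

from itertools import chain, combinations
import numpy as np

CATEGORIES = ["dog", "cat", "ball"]

def powerset(items):
    """All candidate world states: every subset of the category set."""
    return list(chain.from_iterable(combinations(items, r) for r in range(len(items) + 1)))

def frame_loglik(world, frame, miss, hall):
    """Log-likelihood of one frame of detections given a world state and error rates."""
    ll = 0.0
    for c in CATEGORIES:
        present, detected = c in world, c in frame
        if present:
            ll += np.log(1 - miss) if detected else np.log(miss)
        else:
            ll += np.log(hall) if detected else np.log(1 - hall)
    return ll

def infer(frames, grid=np.linspace(0.05, 0.5, 10)):
    """MAP over (world state, miss rate, hallucination rate) by enumeration."""
    best, best_lp = None, -np.inf
    for world in powerset(CATEGORIES):
        for miss in grid:
            for hall in grid:
                lp = sum(frame_loglik(set(world), f, miss, hall) for f in frames)
                if lp > best_lp:
                    best, best_lp = (set(world), miss, hall), lp
    return best

# Noisy percepts from a black-box detector over frames of the same scene:
# "cat" is missed in one frame, "ball" is hallucinated once.
frames = [{"dog", "cat"}, {"dog"}, {"dog", "cat", "ball"}, {"dog", "cat"}]
world, miss, hall = infer(frames)
print("inferred objects:", world, "| miss rate ~", round(miss, 2), "| hallucination rate ~", round(hall, 2))
```

Exact enumeration is feasible only because this toy state space is tiny; in realistic settings the world state and perceptual model are far richer and would require approximate inference. The sketch is meant solely to make concrete the joint inference over percepts and the perceptual system that the abstract describes.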