Keywords: Bayesian, Probabilistic modeling, Meta-cognition, Neurosymbolic AI
TL;DR: Inspired by human meta-cognition, we present MetaCOG: a hierarchical probabilistic model that can be attached to a neural object detector to monitor its outputs and determine their reliability.
Abstract: Humans have the capacity to question what we see and to recognize when our vision is unreliable (e.g., when we realize that we are experiencing a visual illusion). Inspired by this capacity, we present MetaCOG: a hierarchical probabilistic model that can be attached to a neural object detector to monitor its outputs and determine their reliability. MetaCOG achieves this by learning a probabilistic model of the object detector’s performance via Bayesian inference—i.e., a meta-cognitive representation of the network’s propensity to hallucinate or miss different object categories. Given a set of video frames processed by an object detector, MetaCOG performs joint inference over the underlying 3D scene and the detector’s performance, grounding inference on a basic assumption of object permanence. We pair MetaCOG with three neural object detectors and show that it accurately recovers each detector’s performance parameters and improves the overall system’s accuracy. We additionally show that MetaCOG is robust to varying levels of error in object detector outputs, providing a proof-of-concept for a novel approach to detecting and correcting errors in vision systems when ground truth is not available.
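To illustrate the kind of meta-cognitive representation the abstract describes, the sketch below models a detector's per-category propensities to miss or hallucinate objects as Beta-Bernoulli quantities updated by conjugate Bayesian inference. This is a minimal sketch, not the authors' implementation (which performs joint inference over the 3D scene and the detector's parameters); it assumes the scene is known for each frame, and all names (CATEGORIES, DetectorModel) are hypothetical.

```python
import numpy as np

CATEGORIES = ["chair", "bowl", "umbrella"]  # hypothetical object categories


class DetectorModel:
    """Per-category Beta posteriors over miss and hallucination propensities."""

    def __init__(self, prior_alpha=1.0, prior_beta=1.0):
        # Beta(alpha, beta) priors for each category's miss rate (probability of
        # failing to detect a present object) and hallucination rate
        # (probability of a spurious detection per frame).
        self.miss = {c: [prior_alpha, prior_beta] for c in CATEGORIES}
        self.hall = {c: [prior_alpha, prior_beta] for c in CATEGORIES}

    def update(self, scene_objects, detections):
        """Conjugate update given one frame's (assumed-known) scene and detections."""
        for c in CATEGORIES:
            present = c in scene_objects
            detected = c in detections
            if present:
                # A present but undetected object counts toward the miss rate.
                self.miss[c][0 if not detected else 1] += 1
            elif detected:
                # A detection of an absent object counts as a hallucination.
                self.hall[c][0] += 1
            else:
                self.hall[c][1] += 1

    def miss_rate(self, c):
        a, b = self.miss[c]
        return a / (a + b)  # posterior mean of the Beta distribution

    def hallucination_rate(self, c):
        a, b = self.hall[c]
        return a / (a + b)


# Toy usage: the detector hallucinates a "bowl" in a scene containing only a chair.
model = DetectorModel()
model.update(scene_objects={"chair"}, detections={"chair", "bowl"})
print(model.hallucination_rate("bowl"), model.miss_rate("chair"))
```

In the actual approach described in the abstract, the scene is not observed; MetaCOG instead infers it jointly with the performance parameters, using object permanence across video frames to constrain which detections are real and which are errors.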
Supplementary Material: zip
List Of Authors: Berke, Marlene and Azerbayev, Zhangir and Belledonne, Mario and Tavares, Zenna and Jara-Ettinger, Julian
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/marleneberke/MetaCOG/tree/main
Video Url: https://github.com/marleneberke/MetaCOG/tree/main/demos
Submission Number: 477