Keywords: Human-Like Consciousness, Conscious AI Agents, Pretty Hard Problem, Sensory Mimics, Partial Zombies, Qualia Geometry
TL;DR: Thought experiments with "sensory mimics" suggest that AI agents could be constructed that understand their surrounding situation yet do not experience qualia, for lack of integrated sensory-symbolic representations.
Abstract: Many commentators on the question of whether AI systems are or could be conscious have suggested that if such systems are sufficiently human-like in their interactions with people and the world, we might as well grant that they are conscious in more or less the human sense. I argue against this conclusion, using thought experiments involving human "sensory mimics", which are rather close to technological realizability. Such mimics have access only to symbolic propositional information in certain modalities (such as audition or vision), yet behave effectively as if they were fully endowed with those sensory modalities. I draw conclusions about the difference between mere symbolic situation modeling and modeling that integrates symbolic annotations with the perceptual patterns from which they are abstracted. Perceptual patterns can plausibly be viewed as time-varying vector fields whose local geometries in part determine the perceived subjective sensations (qualia). Carrying these observations over to AI agents, one can envisage agents with and without phenomenal consciousness, i.e., agents with and without integration of symbolic abstractions with perceptual patterns. This is a step towards solving the Pretty Hard Problem of consciousness.
Paper Track: Commentary
Submission Number: 75