Keywords: Turing test, intelligence, AI problems, formalization, logic, hierarchical formal systems, Turing machines, hierarchical Turing machines
TL;DR: Something "intelligent" can use new information to do new things. This can be formalized as a boundary condition for "intelligence", even if "intelligence" itself cannot be. Though the condition is formalizable, deciding whether it holds is provably undecidable.
Abstract: We define a ``hierarchical Turing machine'' to be a sequence of total Turing machines. We then prove that any Turing machine in a hierarchical Turing machine is incapable of deciding whether subsequent Turing machines can possibly reach ``novel'' states without new input. We prove this for an exceedingly broad class of hierarchical Turing machines, without invoking Rice's theorem. If we take human-like ``intelligence'' to entail the capacity to infer truly new knowledge after learning new information, this demonstrates that determining whether a machine exhibits human-like ``intelligence'' is undecidable. We also define the complexity class AI to be the class of problems that require ``novel'' insights to solve. It follows from this definition that, in the general case, it is undecidable whether any given problem is in AI. Finally, we show that hierarchical Turing machines that exhibit persistent novelty are, by their very definition, impossible to control by any ``less intelligent'' hierarchical Turing machine.
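The abstract's notion of ``novel'' states can be illustrated with a toy finite-state sketch (my construction, not the paper's formalism): when the state space is finite, novelty relative to an earlier machine in the hierarchy is decidable by exhaustive search, which is precisely the procedure that the abstract's result rules out in the general case.

```python
# Toy illustration (an assumption, not the paper's definition): model each
# machine in a "hierarchy" as a TOTAL step function over a finite state
# space. On finite spaces, the set of reachable states is computable by
# brute force, so "novel" states are detectable; the paper's claim is that
# no such decision procedure exists for general (infinite) Turing machines.

def reachable(step, start):
    """All states reachable from `start` under total step function `step`."""
    seen = {start}
    frontier = [start]
    while frontier:
        nxt = step(frontier.pop())
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
    return seen

# Hypothetical two-machine hierarchy over states 0..5: machine 1 cycles
# among even states; machine 2 also reaches odd states, which are thus
# "novel" relative to machine 1.
m1 = lambda s: (s + 2) % 6
m2 = lambda s: (s + 1) % 6

novel = reachable(m2, 0) - reachable(m1, 0)
print(sorted(novel))  # -> [1, 3, 5]
```

In the finite toy case the comparison terminates; the abstract's undecidability result says that no earlier machine in the hierarchy can, in general, carry out this comparison for the machines that follow it.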
Confirmation: I confirm that I and my co-authors have read the policies and are releasing our work under a CC-BY 4.0 license.
Submission Number: 11