Searle proposed a thought experiment—the details don’t concern us here—purporting to show that a computer program could pass the Turing Test, even though the program manifestly lacked anything that a reasonable person would call “intelligence” or “understanding.” In response, many critics said that Searle’s argument was deeply misleading, because it implicitly encouraged us to imagine a computer program that was simplistic in its internal operations—something like the giant lookup table described in Section 4.1. And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind. Personally, I find this response to Searle extremely interesting—since if correct, it suggests that the distinction between polynomial and exponential complexity has metaphysical significance. According to this response, an exponential-sized lookup table that passed the Turing Test would not be sentient (or conscious, intelligent, self-aware, etc.), but a polynomially-bounded program with exactly the same input/output behavior would be sentient. Furthermore, the latter program would be sentient because it was polynomially-bounded.
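To see why the quantitative gap matters, here is a back-of-the-envelope sketch of my own (the alphabet size and length cap below are invented parameters, not figures from the text): a lookup table indexed by all possible conversation histories grows exponentially with the conversation length, which is why no such table could fit in the observable universe.

```python
# Illustrative arithmetic for the lookup-table objection.
# Both parameters are hypothetical choices for the sketch.

ALPHABET_SIZE = 64     # assumed: distinct characters a history can contain
MAX_LENGTH = 5000      # assumed: cap on conversation length, in characters

# One table entry per possible conversation history:
table_entries = ALPHABET_SIZE ** MAX_LENGTH

ATOMS_IN_UNIVERSE = 10 ** 80   # standard rough estimate

# Exponential growth beats any physical resource bound by an absurd margin:
print(table_entries > ATOMS_IN_UNIVERSE)   # True
```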
If you oppose the possibility of AI in principle, then either (i) you can take the “metaphysical route” (as Searle [113] does with the Chinese Room), conceding the possibility of a computer program passing every conceivable empirical test for intelligence but arguing that such success still isn’t enough, or (ii) you can conjecture an astronomical lower bound on the resources needed either to run such a program or to write it in the first place, though here there is little question of proof for the foreseeable future. Crucially, because of the lookup-table argument, one option you do not have is to assert the flat-out impossibility of a computer program passing the Turing Test, with no mention of quantitative complexity bounds.
While the details differ, what most formal accounts of knowledge have in common is that they treat an agent’s knowledge as closed under the application of various deduction rules like the ones above. In other words, agents are considered logically omniscient: if they know certain facts, then they also know all possible logical consequences of those facts. Sadly and obviously, no mortal being has ever attained or even approximated this sort of omniscience (recall the Turing quote from the beginning of Section 1).
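For concreteness, the closure in question is usually enforced by two textbook ingredients of epistemic modal logic (a standard formulation, not specific to any one of the accounts alluded to above):

```latex
% Axiom K (distribution): knowledge is closed under known implication.
K_i(\varphi \rightarrow \psi) \rightarrow (K_i \varphi \rightarrow K_i \psi)

% Necessitation rule: every theorem of the logic is known.
\frac{\vdash \varphi}{\vdash K_i \varphi}
```

Together, these guarantee that an agent’s knowledge contains every logical consequence of anything it knows, which is exactly the omniscience at issue.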
One early hint that complexity classes can be defined without reference to machines comes from the Cobham axioms: Cobham characterized the polynomial-time computable functions as the smallest class containing certain simple base functions and closed under composition and bounded recursion on notation, with no mention of Turing machines or running times.
Over the past few decades, the idea of defining complexity classes such as P and NP in “logical, machine-free” ways has given rise to an entire field called descriptive complexity theory, which has deep connections with finite model theory. While further discussion of descriptive complexity theory would take us too far afield, see the book of Immerman [77] for the definitive introduction, or Fagin [52] for a survey.
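As a taste of what a “logical, machine-free” definition looks like, the founding result of the field is Fagin’s theorem (stated here from standard references, not from the text above), which characterizes NP without any mention of machines or running times:

```latex
% Fagin's theorem: a property of finite structures is decidable in NP
% if and only if it is expressible in existential second-order logic:
\mathsf{NP} \;=\; \exists\mathsf{SO}
```

For example, graph 3-colorability is captured by the existential second-order sentence asserting that there exist three vertex sets partitioning the graph such that no edge joins two vertices in the same set.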
In practice, formal logic is not much use: despite some progress over the last 150 years, we are still able to formalize only a small percentage of statements. We may never do much better, for the same reason that 1980s-style “knowledge representation” could never have worked: many statements may have no representation more concise than a huge, analog brain state.
The waterfall argument runs roughly as follows: since everything obeys the laws of nature, everything will have some description under which it behaves as if it were intelligent. But this sense of “intelligent behavior” is of no psychological relevance at all.
The waterfall argument has been criticized on numerous grounds: see Haugeland [71], Block [30], and especially Chalmers [37] (who parodied the argument by proving that a cake recipe, being merely syntactic, can never give rise to the semantic attribute of crumbliness). To my mind, though, perhaps the easiest way to demolish the waterfall argument is through computational complexity considerations. Indeed, suppose we actually wanted to use a waterfall to help us calculate chess moves. How would we do that? In complexity terms, what we want is a reduction from the chess problem to the waterfall-simulation problem. That is, we want an efficient algorithm that somehow encodes a chess position P into an initial state s_P ∈ S of the waterfall, in such a way that a good move from P can be read out efficiently from the waterfall’s corresponding final state, f(s_P) ∈ T. But what would such an algorithm look like? We cannot say for sure, certainly not without detailed knowledge about f (i.e., the physics of waterfalls) and the means by which the S and T elements are encoded as binary strings. But for any reasonable choice, it seems overwhelmingly likely that any reduction algorithm would just solve the chess problem itself, without using the waterfall in an essential way at all! A bit more precisely, I conjecture that, given any chess-playing algorithm A that accesses a “waterfall oracle” W, there is an equally good chess-playing algorithm A′, with similar time and space requirements, that does not access W. If this conjecture holds, then it gives us a perfectly observer-independent way to formalize our intuition that the “semantics” of waterfalls have nothing to do with chess.
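To make the shape of the conjecture vivid, here is a toy, runnable illustration of my own (the “waterfall” dynamics, the parity problem standing in for chess, and every name below are invented for the sketch): when the physical process is uninformative about the problem, the encoding step ends up doing all the computational work.

```python
# Toy "reduction" of a problem to an uninformative physical process.
# The "problem" is computing the parity of a bit string; the "waterfall"
# is a dynamics f that merely reverses its state and knows nothing
# about parity.

def waterfall(state: str) -> str:
    """The 'physics' f : S -> T. It just reverses the state string."""
    return state[::-1]

def encode(bits: str) -> str:
    """Map a parity instance to an initial waterfall state s_P.
    To make the answer recoverable later, the encoder must compute
    the parity itself; the waterfall contributes nothing."""
    parity = str(sum(map(int, bits)) % 2)
    return parity + bits

def decode(final_state: str) -> int:
    """Read the answer out of the waterfall's final state f(s_P)."""
    return int(final_state[-1])   # the parity bit lands last after reversal

def parity_via_waterfall(bits: str) -> int:
    return decode(waterfall(encode(bits)))

assert parity_via_waterfall("1011") == 1   # the 'reduction' works...
# ...but only because encode() already solved the problem on its own.
```

The point of the sketch: decode() reads the answer from the final state, but only because encode() computed it beforehand. That is exactly the situation the conjecture says any honest reduction to waterfall-simulation would find itself in.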
The perceptive reader might suspect that we smuggled our conclusion into the assumption that the waterfall states s_P ∈ S and f(s_P) ∈ T were encoded as binary strings in a “reasonable” way (and not, for example, in a way that encodes the solution to the chess problem). But a crucial lesson of complexity theory is that, when we discuss “computational problems,” we always make an implicit commitment about the input and output encodings anyway! So for example, if positive integers were given as input via their prime factorizations, then the factoring problem would be trivial (just apply the identity function). But who cares? If, in mathematically defining the waterfall-simulation problem, we required input and output encodings that entailed solving chess problems, then it would no longer be reasonable to call our problem (solely) a “waterfall-simulation problem” at all.
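The factoring example can be made concrete in a few lines (a sketch of my own; the Factored type is an invented representation, not a standard one):

```python
from typing import List, Tuple

# An integer represented directly by its prime factorization:
# [(prime, exponent), ...], e.g. 360 = 2^3 * 3^2 * 5^1.
Factored = List[Tuple[int, int]]

def factor(n: Factored) -> Factored:
    """Under this input encoding, 'factoring' is the identity function."""
    return n

print(factor([(2, 3), (3, 2), (5, 1)]))   # [(2, 3), (3, 2), (5, 1)]
```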
In my view, there is an important lesson here for debates about computationalism. Suppose we want to claim, for example, that a computation that plays chess is “equivalent” to some other computation that simulates a waterfall. Then our claim is only non-vacuous if it’s possible to exhibit the equivalence (i.e., give the reductions) within a model of computation that isn’t itself powerful enough to solve the chess or waterfall problems.
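One way to make this non-vacuousness requirement precise (my gloss, using standard reducibility notation rather than anything defined in the text) is to insist that the equivalence be witnessed by reductions lying in a fixed complexity class too weak to solve either problem outright:

```latex
% "Chess is equivalent to waterfall-simulation" is non-vacuous only if
% both reductions live in some fixed weak class \mathcal{C} (logspace,
% say) that cannot solve either problem on its own:
\textsc{Chess} \le_{\mathcal{C}} \textsc{Waterfall}
\quad \text{and} \quad
\textsc{Waterfall} \le_{\mathcal{C}} \textsc{Chess},
\qquad \textsc{Chess} \notin \mathcal{C}.
```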