Godel, Escher, Bach

cosmos 18th January 2017 at 11:34pm
Cognitive science

See here for a pdf of an old edition. See also the preface for good summaries of the key ideas. One of the main ideas: how can animate minds emerge from inanimate matter? Main answer: strange loops, which can be interpreted as the inanimate matter acquiring meaning through its emergent properties. Deeper features, like consciousness/self/"I", emerge when this meaning becomes self-referential.

  • Strange loops and tangled hierarchies. Going up a level in a hierarchy can take you back to a lower level, so these hierarchies are really more like Graphs.
    • → good explanation on page 691, and chapter XX in general!
    • Interesting examples of applying strange loop ideas to
      • Government and Politics, Anarchy
      • Philosophy of science, Occultism. Evidence; reminds me of the musing that led me to the Principle of Inclusiveness.
      • Use-mention, symbol-object. Art
      • Hypotheses on Consciousness. Top-down causation, p. 709. His speculations on connections between consciousness and Godel's theorems are interesting, but the lack of a precise definition of consciousness makes me unable to see too deeply into it. Nice idea: undecidability points to higher levels...
      • Similar idea to my observer vs God-perspectives (see Philosophy), on p. 710
      • Nice ideas on Free will on p. 711
    • Self-reference. See Godel incompleteness theorems, the Epimenides paradox. See chapter XVI. Partial "self-engulfing" is not enough to create the "infinite regress" of Strange loops.
      • Self-reference in formal systems. Leon Henkin. Godel.
        • Godel sentences: assert their own unproducibility. "I am lying!"
        • Henkin sentences: assert their own producibility. "I am honest!". One can prove that they are always true (Löb's theorem, answering Henkin's question)! Related to Self-assembly. Both kinds are summarized schematically below.
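Schematically, in provability-logic notation (my summary, not the book's; Bew is the provability predicate, and corner quotes denote Gödel numbering):

```latex
% Godel sentence G: asserts its own unprovability (non-producibility).
% Henkin sentence H: asserts its own provability (producibility).
% Lob's theorem is what guarantees that every Henkin sentence is provable.
\begin{align*}
\text{G\"odel:} \quad & \vdash G \leftrightarrow \lnot\,\mathrm{Bew}(\ulcorner G \urcorner)\\
\text{Henkin:}  \quad & \vdash H \leftrightarrow \mathrm{Bew}(\ulcorner H \urcorner)\\
\text{L\"ob:}   \quad & \text{if } \vdash \mathrm{Bew}(\ulcorner \varphi \urcorner) \rightarrow \varphi \text{, then } \vdash \varphi
\end{align*}
```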
      • Self-reproduction, a kind of active self-reference. What is a copy (when are two things the same)? What is the original? Simulacra... A less stringent self-rep can also happen where the copies aren't identical, but are isomorphic, or belong to the same family. (A minimal code example follows below.)
        • Typogenetics to study Genetics. Enzymes are sets of instructions which act on DNA strands, and are themselves represented by amino acid sequences, which by the "typogenetic code" can be constructed from DNA strands themselves. This is similar to simulations of Artificial life, Artificial chemistry (Turing gases, etc.). Molecular biology. polyribosomes and two-tiered canons (p. 527). Abiogenesis
        • The type of self-reflection in typogenetics is similar to that in TNT.
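A classic quine is the cleanest toy model of this kind of typographical self-reproduction. A minimal Python sketch (mine, not from the book):

```python
# A quine: a program whose output is exactly its own source text, i.e.
# "active self-reference". The string plays the role of the DNA strand;
# the print statement plays the role of the copying machinery.
s = 's = %r\nprint(s %% s)'
print(s % s)
```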
      • "Sufficiently powerful" systems, reminiscent of Turing completeness.
      • Godel's theorems and related ideas point toward an essential impossibility of perfection!! (called limitative theorems on page 697). One would need to make this more formal, of course... But if a system is powerful enough to be able to represent itself, then one can do bad things to it... "To seek self-knowledge is to embark on a journey which ... will always be incomplete, cannot be charted on any map, will never halt, cannot be described"
      • Self-ref in Developmental biology too: Feedback in Gene expression regulation, and maybe strange loops in Cell specialization and Morphogenesis?
  • Formal system
  • Thinking inside and outside formal systems. Mechanical (M-) mode, Intelligent (I-) mode, and Un-mode (the Zen mode; aka MU-mode, free mode)
    • Humans, unlike most machines we have built, introspectively observe what they themselves do. See page 26. Self-awareness
    • Jumping out of the system. See Limits and infinity for the infinity of infinities of infinities, etc. We can keep jumping out: as long as we give the new thing a name, we can apply the same pattern. Godel incompleteness theorems apply equally well to the extended systems. One can even find the pattern and define an axiom schema, and the same trick applies! It is similar to Cantor's Diagonal argument in that it can't be avoided; it is an essential incompleteness
    • Arguments for why Godel incompleteness theorems show the limitations of Intelligence in general, not our superiority to computers. Related to the Church-Kleene theorem on Ordinal numbers. Irregularities, and irregularities in the irregularities, etc.!
    • Arguments about why transcending one's system is impossible; however, under some interpretations it is possible. I think it's better interpreted as augmenting oneself, though transcending is a good word. It all hinges on what one is willing to count as "the system": something that can change, or something fixed by definition!
    • More on I-mode and M-mode on page 613. AI should be able to work in I-mode, and "jump out" of systems. (A toy MIU-system search, for contrast with I-mode, follows.)
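The book's MU-puzzle (the MIU-system of chapter I) makes the contrast concrete: M-mode is grinding out theorems by the rules; I-mode is noticing, from outside, that MU can never appear. A hedged Python sketch of the M-mode part (my transcription of the four rules):

```python
# Brute "M-mode" search in the MIU-system: start from the axiom MI and
# apply the four typographical rules, never stepping outside the system.
from collections import deque

def successors(s):
    if s.endswith("I"):               # Rule I:   xI -> xIU
        yield s + "U"
    if s.startswith("M"):             # Rule II:  Mx -> Mxx
        yield "M" + s[1:] * 2
    for i in range(len(s) - 2):       # Rule III: xIIIy -> xUy
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]
    for i in range(len(s) - 1):       # Rule IV:  xUUy -> xy
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]

def theorems(limit=20):
    seen, queue = {"MI"}, deque(["MI"])
    while queue and len(seen) < limit:
        for t in successors(queue.popleft()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

print(sorted(theorems(), key=len))    # MU never shows up; seeing *why*
                                      # requires jumping out (I-mode).
```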
  • Do words and thoughts follow formal rules, or do they not?
  • Formal systems and Semantics: meaningful vs meaningless interpretations
    • Formal systems can have more than one meaningful interpretation
    • Limits in the decidability of Formal systems. See chapter III, page 72. Not all recursively enumerable sets are recursive, i.e. there exists a Formal language (a set of strings) which can be generated by a Formal system, but whose complement can't be generated by any formal system. This implies that there exist formal systems for which there is no typographical (computable) decision procedure. Figure and ground. (A sketch of this asymmetry follows this list.)
    • If you add or change rules of the system, it is not the same system, so meaningful interpretations may become meaningless. For example, Geometry.
    • Meaning of consistency: dependent on the interpretation of the formal system. Page 94. Hypothetical worlds; is it possible to think an impossible thought? Internal vs external consistency; see also page 99.
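A hedged sketch of the semi-decidability asymmetry (the theorem enumerator here is a stand-in, not a real formal system):

```python
# An r.e. set's members can be confirmed by enumeration, but for a
# non-member the search below never halts: generation without decision.
from itertools import count

def enumerate_theorems():            # stand-in for a system's derivations
    for n in count():
        yield f"theorem-{n}"         # hypothetical theorem labels

def semi_decide(candidate):
    for t in enumerate_theorems():
        if t == candidate:
            return True              # halts iff candidate is a theorem

print(semi_decide("theorem-42"))     # True; a non-theorem loops forever
```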
  • More on Meaning, as isomorphism (Functions), in chapter VI. Analogy with Genotype-phenotype maps. "To put it as succinctly as possible, one view says that in order for DNA to have meaning, chemical context is necessary; the other view says that only intelligence is necessary to reveal the "intrinsic meaning" of a strand of DNA." Example from ancient language deciphering. Generally, we can say: meaning is part of an object to the extent that it acts upon intelligence in a predictable way.
    • To understand the inner message is to have extracted the meaning intended by the sender.
    • To understand the frame message is to recognize the need for a decoding-mechanism. Frame messages often take the form of Aperiodic crystals, or their analogues.
    • To understand the outer message is to build, or know how to build, the correct decoding mechanism for the inner message.
    • There are often many levels of these kinds of messages. Meaning is deconstructed in layers of meaning extraction / mappings. See the paper on why deep and cheap learning works, on Deep learning theory. Deeper levels of meaning extraction, approaching the frame and outer levels (which are said to give triggers, instead of explicit meaning), are being researched; I think they are called progressive neural nets. Decoding the outer message is a fallible enterprise, just like Intelligence (see the Dual process theory video).
    • Universal Language for intelligences... Machine learning may be discovering it... Relativity...
  • "What is Consciousness?". It will be the unraveling of the nature of the isomorphisms that underlies Meaning that will be the key element in answering this question. See about symbols, below
  • Truth vs theoremhood; how we think vs standard machines. Pages 86-87
  • Theoretical computer science
    • Recursion. Loops, subroutines. Things which are the same in a certain sense, but different in another sense, taking part in some Structure, like a simple recursive structure or a loop. Often one just wraps them in functions. These are prevalent in computer science. He also describes ideas similar to Godel machines: programs which can modify themselves, as probably being key for intelligence.
    • More on recursion in chapter XIII: BlooP, FlooP, and GlooP. Computability theory. Types of recursion:
      • Primitive recursive predicates are statements (e.g. in number theory) which can be found to be either true or not in a predictably bounded number of steps (a function that can be computed like this is also called primitive recursive). This is implemented in the programming language BlooP (which allows only bounded loops). Ganto's Ax: it turns out that a system that represents all primitive recursive truths is "sufficiently powerful", and then Godel's incompleteness theorems apply (see p. 407). A Formal system is said to represent a set of predicates when all the true instances of the predicates are theorems, and all false instances are non-theorems (see p. 417). TNT represents all primitive recursive truths about number theory. One can show that there are computable functions which are not primitive recursive, by a diagonalization argument.
      • Diagonal arguments explained on pp. 420-424. Related to Godel's and Turing's arguments. (A halting-problem version is sketched below, after Church's theorem.)
      • To make the system more powerful, allow free (unbounded) loops: MU-loops. Now we have the Halting problem: no program, in BlooP or FlooP, can test whether a FlooP program terminates on all its inputs. (See the BlooP vs FlooP sketch after this list.)
      • More on Computability theory:
        • General recursive: computable by terminating FlooP programs (programs in Turing-complete languages). Often referred to as simply "recursive"
        • Partial recursive: computable by FlooP programs that may fail to terminate on some inputs.
      • Church-Turing thesis
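A hedged Python analogue of the BlooP/FlooP distinction (bounded vs free loops; my sketch, not the book's syntax):

```python
# BlooP-style: the loop bound is computed before the loop starts, so
# termination is guaranteed; factorial is primitive recursive.
def factorial_bloop(n):
    result = 1
    for i in range(1, n + 1):       # at most n passes, known up front
        result *= i
    return result

# FlooP-style: a free ("MU-") loop with no bound stated in advance;
# it may never halt if the conjecture has no counterexample.
def first_counterexample_floop(conjecture):
    n = 0
    while True:
        if not conjecture(n):
            return n
        n += 1

print(factorial_bloop(5))                             # 120
print(first_counterexample_floop(lambda n: n < 100))  # 100
```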
    • Church's theorem: there is no infallible method for telling theorems of TNT from nontheorems. Also Tarski's theorem
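The diagonal construction behind Turing's and Church's results, as a sketch; `halts` is a hypothetical placeholder, since no such total tester can exist:

```python
# Suppose halts(prog, inp) were an always-terminating termination tester.
def halts(prog, inp):
    raise NotImplementedError        # hypothetical; cannot actually exist

def defiant(prog):
    if halts(prog, prog):            # if the tester says "halts"...
        while True:                  # ...loop forever;
            pass
    return                           # ...if it says "loops", halt at once.

# defiant(defiant) halts iff halts(defiant, defiant) says it doesn't:
# a contradiction, so no FlooP program can decide halting in general.
```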
  • When are two things the same? Come at it from many angles, and at the end see how simply it is connected with the nature of Intelligence. Appears in...
  • Zen
    • Koans → free the Mind of Logic and structure.
    • Enlightenment: transcending dualism.
    • A major part of Zen is the fight against reliance on words
  • Levels of description. See Complex systems, Emergence, Renormalization. Key problem in Artificial intelligence. Top-down causation. Multi-level descriptions, and the problems when descriptions are similar.
    • Machine code, assembly, Assemblers, Compilers, Interpreter
    • To suggest ways of reconciling the software of mind with the hardware of brain is a main goal of this book.
    • Comment on "computers can only do what you tell them to do", and problems with this phrase, on page 306. tl:dr: we don't know what we tell them to do, when we program at higher levels.
  • MU. Unasking the question. Asserting that there is a larger context where both possibilities fit. Very Zen.
  • Intelligence
  • Neuroscience and Cognitive science; brain and Thought. How can the brain give rise to concepts/thought? The mapping from brain to thoughts is a key question of the book
    • Brain has active symbols, instead of passive typographical ones. AugMath
    • A calculus of descriptions (p. 338) is posited as key to working with concepts in thought. It is said to be intensional and not extensional, as it need not be anchored down to specific, known objects. This is what gives it its flexibility: the ability to manufacture and manipulate symbols and descriptions that may be hypothetical.
    • Funneling/chunking in neural networks during Perception
    • How/where are concepts represented in the brain? Many unknowns. Are they more like hardware, or software? What's the difference? The difference is a bit semantic really, but explained well on pp. 356-357. The software idea is very similar to the idea that concepts are represented as Polychronous neural groups, or some neural groups with certain firing rates.
    • Symbols. Awake (activated) or dormant symbols. When symbols are activated, they activate other symbols. What is the size of the concepts that are stored in symbols? Probably concepts of the size of words. But the problem of counting/identifying the symbols is hard. Sometimes concepts that appear to be a combination of symbols seem to become a symbol themselves, if that combination is used often enough. (A toy spreading-activation sketch follows this list.)
    • Classes and instances. There is also generality in the specific. Prototype principle: the most specific event can serve as a general example of (generate) a class of events.
    • Imagining, simulation, intuitive physics (see Josh Tenenbaum, Program induction, Probabilistic programming). Declarative knowledge vs Procedural knowledge. See pp. 363-4.
    • Partial isomorphisms (comparisons) between people's minds at the symbol level. How do you represent the conceptual structure in a mind? When would two minds be similar? These comparisons should reveal the common cores of human minds. This is explored through a geographical map analogy. Relations with Translation. etherware, p. 381
    • Thoughts as paths through concept-space. Knowledge and Beliefs are easily traversed paths. Can have falsities, dreams, surreal worlds as paths that are much more contrived and have to be guided. A chunked description of a brain state will consist of a probabilistic catalogue, in which are listed those beliefs which are most likely to be induced (and those symbols which are most likely to be activated) by various sets of "reasonably likely" circumstances, themselves described on a chunked level.
    • Consciousness. "Our alternative to the soulist explanation - and a disconcerting one it is, too - is to stop at the symbol level and say, 'This is it - this is what consciousness is. Consciousness is that property of a system that arises whenever there exist symbols in the system which obey triggering patterns somewhat like the ones described in the past several sections.' Put so starkly, this may seem inadequate. How does it account for the sense of 'I', the sense of self?"
    • "I" is a symbol itself; actually it's considered a subsystem: a little subbrain that reflects/models/creates chunked descriptions of the whole brain/Mind.
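A toy spreading-activation sketch of awake/dormant symbols (a cartoon of mine, not the book's model; the concept graph and weights are made up):

```python
# Activating one symbol wakes linked symbols whose connection weight
# crosses a threshold: a cartoon of "triggering patterns".
links = {
    "grandmother": {"family": 0.9, "cookies": 0.6},   # hypothetical graph
    "cookies": {"milk": 0.8},
    "family": {},
    "milk": {},
}

def activate(seed, threshold=0.5):
    awake, frontier = {seed}, [seed]
    while frontier:
        s = frontier.pop()
        for t, weight in links.get(s, {}).items():
            if weight >= threshold and t not in awake:
                awake.add(t)          # a dormant symbol wakes up
                frontier.append(t)
    return awake

print(activate("grandmother"))        # all four symbols end up awake
```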
  • Philosophy of mind (chapter XVII, p. 559). AI. Every aspect of thinking can be viewed as a high-level description of a system which, on a low level, is governed by simple, even formal, rules. An "informal system" is obtained by Coarse-graining the formal system at the bottom (which, by the way, is actually probabilistic in the real brain!)
    • Church-Turing thesis, p. 568. If by high-level he means conscious levels, I would probably agree; otherwise, I'm not sure his isomorphism is very insightful. Actually, he did mean the conscious level!
    • Artificial intuition requires "uninterpretable" levels. Artificial neural networks and Deep learning do this! See Dual process theory, page 570 and around. We must simulate the substrate (like neural nets) because that is the level on which the system is formal, and thus simulatable. At the level above, the system is informal. Relation to coarse-graining and emergence. I think what happens is a combination of: (1) at the low level the computational rules are simpler, and thus easier to simulate, even if there is more detail; (2) at the high level, the rules may be too probabilistic to capture the intelligent behaviour in enough detail anyway. Now, at the time of publication of GEB, AI researchers understood this, but thought that neural simulation was not necessary, as other substrates might be better/easier to simulate. This may well be true, but today's most successful systems take considerable inspiration from biological neurons (though not totally; e.g. they often ignore spikes)
    • Critique of the Soulist position. The irrational and rational can coexist on different levels (p. 575). There is no reason to suppose the brain doesn't follow physics, faultlessly, and the variety of behaviour, including the mostly irrational kinds, is just emergent at higher levels (what he calls epiphenomena). This, I think, is the most likely explanation of brain behaviour. The point - a point which has been made several times earlier in various contexts - is simply that meaning can exist on two or more different levels of a symbol-handling system, and along with meaning, rightness and wrongness can exist on all those levels. The brain is rational; the mind may not be.
    • He hypothesizes a unity of Intelligences (p. 579, and the chapter on Meaning in language), but I'm not convinced by this. Although it could certainly be the case, I think there could also be a big diversity of possible intelligences. Reading on, I think the commonality he refers to is a "common core" of intelligence, and he does allow for a large variety in later comments.
    • More on Meaning, and its relation to Computability theory. Also Beauty
  • Artificial intelligence. Nice application of AI to maths (p. 614)
    • Turing test (imitation game with machines) as an Operational definition of intelligence. The imitation game is like Generative adversarial networks!
    • Reinforcement learning ideas and recursion on page 604
    • When is a program original? p. 606.
    • Problem reduction strategy.
    • Changing the problem space. Intelligent systems and search. Exploration/tentativity
    • Knowledge representation as the crux of AI; cf. Knowledge management. "Many of the examples above have been cited in order to stress that the way a domain is represented has a huge bearing on how that domain is "understood"". More on declarative vs procedural knowledge (p. 616) (see also above, under Neuroscience (chapter on brain and thought)). Modularity of knowledge. (A toy contrast in code follows, at the end of this list.)
    • Imagery, imagination, crucial for Understanding, need to model world.
    • Grammars. Johann Amos Comenius (1633): a language where falsities are inexpressible, or at least ungrammatical
    • In SHRDLU, there is a concurrent interplay between syntactic parsing, semantic analysis, and deduction using real world knowledge. This reminds me of the importance of feedback in the brain! (p.630-1). His idea is an example of Program synthesis instructed by Natural language understanding.
    • Counterfactuals and sameness (chapter XIX). The "almost" lies in the mind, not in the external facts. Closeness of stories/events, etc. I think this is very much related to our ability to simulate/Model the world.
    • Layers of stability (similar to an idea I had, with nested contexts, which are considered more and more variable; p. 643-4). These are called frames in cognitive science. Cosmos can be considered a super-frame.
    • Pattern recognition. Discussion of Features in p. 647. Bongard problems.
    • Message-passing languages. Smalltalk, p. 662. Frame + actor = symbol. Demons: subroutines lying there, waiting to be triggered. Society of mind.
    • Abstraction: creation of conceptual skeletons (abstractions) from instances; use them to make Analogies (p. 669). Ports of access, conceptual partitions (p. 671)
    • Some speculations on p. 676; I only agree with some of them, though I like his comments on Superintelligence and on Embodied cognition
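A toy contrast (mine, not the book's) between declarative and procedural knowledge: the same family fact stored as data vs embodied in a procedure.

```python
# Declarative: knowledge as stored assertions (easy to inspect and modify).
parent_of = {"Alice": "Carol", "Carol": "Eve"}

# Procedural: knowledge embodied in a process that derives new facts.
def grandmother(person, parent_of):
    p = parent_of.get(person)
    return parent_of.get(p) if p else None

print(grandmother("Alice", parent_of))   # 'Eve': derived, never stored
```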

G(n) = \lfloor\frac{n + G(G(n-1))}{2}\rfloor (chapter V)
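A direct Python transcription of this recurrence (a sketch: the base case G(0) = 0 is my assumption, by analogy with the recursive functions of chapter V):

```python
# Memoized evaluation of G(n) = floor((n + G(G(n-1))) / 2), with G(0) = 0.
from functools import lru_cache

@lru_cache(maxsize=None)
def G(n):
    if n == 0:
        return 0                      # assumed base case
    return (n + G(G(n - 1))) // 2     # floor division implements the floor

print([G(n) for n in range(10)])      # [0, 0, 1, 1, 2, 3, 3, 4, 5, 6]
```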