Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid won the Pulitzer Prize for general nonfiction. The book draws on puzzles from formal logic and computation to explore ideas about AI, weaving its arguments together with the work of artist M. C. Escher and composer Johann Sebastian Bach.
Hofstadter is a renowned cognitive scientist and author who has written extensively on complex topics such as artificial intelligence. Gödel, Escher, Bach is built around a philosophical inquiry into the nature of life and mind in relation to mathematics – an inquiry into how living, thinking beings can arise from physical matter that has no sense or intentionality of its own.
Intelligence is a complex, multifaceted thing, but it can be broken down into parts. Scientists have studied specific aspects and techniques of intelligence for years; according to Hofstadter, these include speech synthesis, language comprehension, and vision. It is not enough to look at how our brains work from an architectural standpoint, though, because much more arises once you add other bodily functions such as motor control and emotion recognition.
In a preface written for the book’s 20th-anniversary edition in 1999, he pointed out an apparent inconsistency. Computers give the impression of being rigid, unemotional, rule-following machines with no desires. Is it contradictory to design intelligent behavior into a system that is not itself intelligent? In other words, is the gap between intelligence and non-intelligence insurmountable?
Some believe that we can ultimately program computers to be adaptable, thinking machines using extensive sets of formal rules. Hofstadter contends that human minds may themselves operate under rules of a similar kind. The debate remains open, fueled in part by early AI work in languages such as LISP and by programs such as ELIZA, which carried on text conversations by reflecting users’ own statements back at them; many users did not realize they were interacting with a simple pattern-matching program rather than an attentive listener.
The upper levels must communicate with the level below them, even though they need not be aware of what is happening there. A neural-network computer model resembles the neuronal substrate of the brain in form and structure (it is, roughly, isomorphic to it). Yet even more than 20 years after Hofstadter’s book, no AI matches the human brain in complexity.
Hofstadter asserts that there is no reason to believe AI won’t one day express the full range of human emotions, including the ability to produce lovely music. While there is still a long way to go before AI can match the brain, he contends that since people follow rules just as computer programs do, AI should eventually be able to express every human emotion. A reductionistic theory of the mind, however, must incorporate “soft” concepts like levels, mappings, and meanings in order to make sense. To argue that AI can reach our level of understanding, Hofstadter draws on Kurt Gödel’s incompleteness theorem.
How has artificial intelligence changed in the 20 years since 1999 that might have an impact on his thesis?
In Life 3.0: Being Human in the Age of Artificial Intelligence, MIT physicist and cosmologist Max Tegmark offers an update:
“An AI’s ability to create appropriate email responses or carry on a spoken conversation will improve as it becomes more adept at linguistic prediction. This might seem like there is thought going on, at least to an observer.”
However, he also adds:
“But there is still a long way to go for AI. Even while I must admit that I get a little deflated when an AI out-translates me, I feel better after I remind myself that, at this point, it doesn’t actually understand what it’s saying. Without ever connecting these words to anything in the real world, it learns patterns and interactions involving words by being trained on enormous data sets. AI is unable to comprehend the context or significance of the symbols it manipulates.”
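Tegmark’s point about pattern learning without grounding can be made concrete with a toy next-word predictor. This is a minimal, hypothetical sketch, not any real system: it counts which word follows which in a tiny corpus and “predicts” by frequency alone, with no connection to what the words mean.

```python
from collections import Counter, defaultdict

# A tiny training corpus: the model will only ever see word order,
# never anything about cats, mats, or the world.
corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

# For each word, count which words have followed it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" — it follows "the" most often in the corpus
```

Scaled up by many orders of magnitude and given far richer statistics, this is the family of techniques Tegmark is describing: the predictions can look thoughtful, but nothing in the model connects “cat” to any actual cat.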
Tegmark doesn’t rule it out, but he doesn’t think we will soon reach artificial general intelligence (AGI), which he regards as being on par with human intelligence. AGI is what Hofstadter calls the “Holy Grail of AI.”
One of the most celebrated victories in AI history came in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov. But instead of playing chess the way a human does, Deep Blue used brute force, searching enormous numbers of positions to choose the best move at each point in the game.
In a section of his book about the game, Hofstadter claims that only a chess program with general intelligence would be able to defeat any human. Deep Blue, however, proved that claim false.
Question: Will there be chess programs that can beat anyone?
Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. “Do you want to play chess?” “No, I’m bored with chess. Let’s talk about poetry.” That may be the kind of dialogue you could have with a program that could beat everyone.
Hofstadter recently told The Atlantic that Deep Blue’s victory gives us no insight into how humans play chess.
“Deep Blue plays very good chess — so what? I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”
The greatest recent advances in AI have come from these brute-force techniques, which rely on powerful computers and vast amounts of data. Machine learning, a statistical technique, is advancing AI in areas such as vision and language processing that had historically lagged behind. Even though artificial neural networks loosely imitate the web of connections between brain cells, machine learning differs significantly from the way the human brain interprets imagery and language.
Until we discover otherwise, the most tenable philosophical position is that human consciousness is something unique. There are no signs that AI programs will overcome these limitations, and there is no evidence to support Hofstadter’s claim that humans are rule-governed in this way. Simply put, we have no mechanistic explanation of how the human mind works.
What else did Hofstadter anticipate back when he wrote the book?
Below are some questions and speculations about AI.
Question: Will a computer program ever write beautiful music?
Speculation: Yes, but not soon. Music is a language of emotions, and until programs have emotions as complex as ours, there is no way a program will write anything beautiful. There can be “forgeries” – shallow imitations of the syntax of earlier music – but despite what one might think at first, there is much more to musical expression than can be captured in syntactical rules. There will be no new kinds of beauty turned up for a long time by computer music-composing programs. Let me carry this thought a little further.
To think-and I have heard this suggested-that we might soon be able to command a preprogrammed mass-produced mail-order twenty-dollar desk-model “music box” to bring forth from its sterile circuitry pieces which Chopin or Bach might have written had they lived longer is a grotesque and shameful misestimation of the depth of the human spirit. A “program” which could produce music as they did would have to wander around the world on its own, fighting its way through the maze of life and feeling every moment of it. It would have to understand the joy and loneliness of a chilly night wind, the longing for a cherished hand, the inaccessibility of a distant town, the heartbreak and regeneration after a human death.
It would have to have known resignation and world-weariness, grief and despair, determination and victory, piety and awe. In it would have had to commingle such opposites as hope and fear, anguish and jubilation, serenity and suspense. Part and parcel of it would have to be a sense of grace, humor, rhythm, a sense of the unexpected – and of course an exquisite awareness of the magic of fresh creation. Therein, and therein only, lie the sources of meaning in music.
Question: Will emotions be explicitly programmed into a machine?
Speculation: No. That is ridiculous. Any direct simulation of emotions – PARRY, for example – cannot approach the complexity of human emotions, which arise indirectly from the organization of our minds. Programs or machines will acquire emotions in the same way: as by-products of their structure, of the way in which they are organized – not by direct programming. Thus, for example, nobody will write a “falling-in-love” subroutine, any more than they would write a “mistake-making” subroutine. “Falling in love” is a description which we attach to a complex process of a complex system; there need be no single module inside the system which is solely responsible for it, however!
Question: Could you “tune” an AI program to act like me, or like you-or halfway between us?
Speculation: No. An intelligent program will not be chameleon-like, any more than people are. It will rely on the constancy of its memories, and will not be able to flit between personalities. The idea of changing internal parameters to “tune to a new personality” reveals a ridiculous underestimation of the complexity of personality.
Question: Will there be a “heart” to an AI program, or will it simply consist of “senseless loops and sequences of trivial operations” (in the words of Marvin Minsky)?
Speculation: If we could see all the way to the bottom, as we can in a shallow pond, we would surely see only “senseless loops and sequences of trivial operations” – and we would surely not see any “heart”. Now there are two kinds of extremist views on AI: one says that the human mind is, for fundamental and mysterious reasons, unprogrammable.
The other says that you merely need to assemble the appropriate “heuristic devices – multiple optimizers, pattern-recognition tricks, planning algebras, recursive administration procedures, and the like”, and you will have intelligence. I find myself somewhere in between, believing that the “pond” of an AI program will turn out to be so deep and murky that we won’t be able to peer all the way to the bottom. If we look from the top, the loops will be invisible, just as nowadays the current-carrying electrons are invisible to most programmers. When we create a program that passes the Turing test, we will see a “heart” even though we know it’s not there.
Question: Will AI programs ever become “superintelligent”?
Speculation: I don’t know. It is not clear that we would be able to understand or relate to a “superintelligence”, or that the concept even makes sense. For instance, our own intelligence is tied in with our speed of thought. If our reflexes had been ten times faster or slower, we might have developed an entirely different set of concepts with which to describe the world. A creature with a radically different view of the world may simply not have many points of contact with us. I have often wondered if there could be, for instance, pieces of music which are to Bach as Bach is to folk tunes: “Bach squared”, so to speak. And would I be able to understand them? Maybe there is such music around me already, and I just don’t recognize it, just as dogs don’t understand language.
The idea of superintelligence is very strange. In any case, I don’t think of it as the aim of AI research, although if we ever do reach the level of human intelligence, superintelligence will undoubtedly be the next goal-not only for us, but for our AI-program colleagues, too, who will be equally curious about AI and superintelligence. It seems quite likely that AI programs will be extremely curious about AI in general-understandably.
Question: You seem to be saying that AI programs will be virtually identical to people, then. Won’t there be any differences?
Speculation: Probably the differences between AI programs and people will be larger than the differences between most people. It is almost impossible to imagine that the “body” in which an AI program is housed would not affect it deeply. So unless it had an amazingly faithful replica of a human body – and why should it? – it would probably have enormously different perspectives on what is important, what is interesting, etc.
Wittgenstein once made the amusing comment, “If a lion could speak, we would not understand him.” It makes me think of Rousseau’s painting of the gentle lion and the sleeping gypsy on the moonlit desert. But how does Wittgenstein know? My guess is that any AI program would, if comprehensible to us, seem pretty alien. For that reason, we will have a very hard time deciding when and if we really are dealing with an AI program, or just a “weird” program.
Question: Will we understand what intelligence and consciousness and free will and “I” are when we have made an intelligent program?
Speculation: Sort of – it all depends on what you mean by “understand”. On a gut level, each of us probably has about as good an understanding as is possible of those things, to start with. It is like listening to music. Do you really understand Bach because you have taken him apart? Or did you understand it that time you felt the exhilaration in every nerve in your body? Do we understand how the speed of light is constant in every inertial reference frame? We can do the math, but no one in the world has a truly relativistic intuition. And probably no one will ever understand the mysteries of intelligence and consciousness in an intuitive way. Each of us can understand people, and that is probably about as close as you can come.