Religion is not the antithesis of technology – in many ways, it is the prelude to it
David F. Noble sets out to illustrate a counter-intuitive fact. The greatest scientific minds of the past and present have held religious beliefs, and those beliefs are not coincidental: they form the very basis for the pursuit of science and technology in the first place – not in a metaphorical but in a literal sense. From Boyle, to Descartes, to Boole, to Newton, to Francis Collins – religion has been at the heart of the deepest scientific questions mankind has ever asked.
In The Religion of Technology, Noble tries to tell us why. In the U.S., the connection between religion and technology has much to do with the expectation of the return of Jesus Christ.
Technology as deliverance
The distinctive feature of the modern age, where our relationship to technology is concerned, is that we are no longer content with the comfort and survival technology brings to our lives; we now seek deliverance through it.
Noble believes that such a development is the consequence of a myth we have consciously or unconsciously constructed. He distinguishes between two classes of religious scientists: those who believe that science is a way of discovering clues about the mind of God, and those who believe that it is our duty, as human beings, to enact the divine plan here on earth.
Many religious scholars have warned about attachment to technology. To them, technology is merely a tool to make the lives of human beings more comfortable; it exists for man in his fallen state and nothing more. A second group of religious thinkers, more engaged with the useful arts, holds that technology is mankind’s salvation. Noble’s thesis throughout the book is that we are, whether we like it or not, enacting the beliefs of this second group. And with such an enactment come dangerous consequences.
Fallen technological man
“Quite apart from those supernatural arts of living in virtue and of reaching immortal beatitude which nothing but the grace of God which is in Christ can communicate to the sons of promise and heirs of the kingdom, “there have been discovered and perfected, by the natural genius of man, innumerable arts and skills which minister not only to the necessities of life but also to human enjoyment.” – Augustine, The City of God
Augustine recognized the “astonishing achievements” that had taken place in cloth-making, navigation, architecture, agriculture, ceramics, medicine, weaponry and fortification, animal husbandry, and food preparation; in mathematics, astronomy, and philosophy; as well as in language, writing, music, theater, painting, and sculpture. But he emphasized again that “in saying this, of course, I am thinking only of the nature of the human mind as a glory of this mortal life, not of faith and the way of truth that leads to eternal life.… And, remember, all these favors taken together are but the fragmentary solace allowed us in a life condemned to misery.”
Here, Noble contrasts the views of Augustine with more recent techno-religious enthusiasts, who are working towards a paradise on earth. Jacques Ellul made a similar point to Augustine when he said that technology existed for mankind in its fallen state, and had no other significance. In this view, technology had nothing to do with transcendence – it signified the denial of transcendence.
Transcendence, the recovery of lost perfection, could be gained only by the grace of God alone. Moreover, those so blessed, said Augustine, would partake in a “universal knowledge” far beyond the ken of mere mortals. “Think how great, how beautiful, how certain, how unerring, how easily acquired this knowledge then will be. And what a body, too, we shall have, a body utterly subject to our spirit and one so kept alive by spirit that there will be no need of any other food.” In the early Middle Ages, for reasons that remain obscure, the relationship between technology and transcendence began to change.
In the Book of Revelation (the last book of the Bible), there is a prophecy that foretells a thousand-year reign on earth of the returned Messiah, Christ, with an elite corps of the saintly elect. It is here that the fate of the Fall is reversed and the curse is lifted. Redeemed mankind is permitted to return to paradise, eat from the tree of life, and regain Adam’s original perfection, immortality, and godliness.
Millenarianism
In the early centuries of the Christian era, there were myriad millenarian voices heralding the imminent advent of the Kingdom of God, which drew their inspiration from biblical prophecy and mystical vision. But these voices were soon marginalized by the clerical caste, which embodied the power and authority of the Great Church. In the view of this emergent elite, the millennium had already begun with the establishment of the Church and they were the earthly saints. In their eyes, belief in a millennium yet to come was subversive, because it suggested that the Kingdom of God had not yet arrived but belonged to a future time beyond the Church.
Despite official condemnation, belief in a future millennium continued to flourish, mostly as an expression of popular desperation and dissent. The medieval ecclesiastical elite neither offered nor harbored hope of an earthly paradise beyond the Church. In the high Middle Ages, however, in the wake of religious revival, a rigorist Church-reform movement, the Crusades, and renewed external threats to Christendom, millenarianism regained a degree of elite respectability, especially among the new religious orders, which made use of apocalyptic mythology to validate their identity and destiny, and thereby magnify their significance.
Bacon
Bacon thus sought to close the gap between technology and philosophy, noting scornfully that among philosophers “it is esteemed a kind of dishonor unto learning to descend to inquiry or meditation upon matters mechanical.” Toward this end, he insisted that philosophers must overcome their elite disdain for the useful arts, and learn to deal with “things themselves,” “mean and even filthy things,” in order better to appreciate their value and appropriate their fruits.
In his defense of the worthiness of the useful arts, Bacon forcefully reasserted the tradition begun long before by Erigena, Hugh of St. Victor, and Roger Bacon, and sustained, most recently, by Paracelsus, Bruno, and the Rosicrucians.
“It is not the pleasure of curiosity, nor the quiet of resolution, nor the raising of the spirit nor victory of wit, nor faculty of speech, nor lucre of profession, nor ambition of honor or fame, nor enablement of business, that are the true ends of knowledge,” Bacon insisted in Valerius Terminus, “but it is a restitution and reinvesting (in great part) of man to the sovereignty and power (for whensoever he shall be able to call creatures by their true names he shall again command them) which he had in his first state of creation.” “We are agreed, my sons, that you are men,” Bacon wrote in his “Refutation of Philosophies.” “That means, as I think, that you are not animals on their hind legs, but mortal gods.”
Bacon’s bold biblically inspired vision reflected the exaggerated anthropocentric assumptions of his seventeenth-century Protestant faith, the conviction that “human ascendancy was central to the Divine plan.”
“Man, if we look to final causes, may be regarded as the centre of the world insomuch that if man were taken away from the world, the rest would seem to be all astray, without aim or purpose,” Bacon wrote. In the same scriptural spirit, he counseled humility in the pursuit of knowledge and power, lest mankind repeat the sin of Adam, but he defended his grandiose enterprise by insisting that “it was not that pure and unspotted natural knowledge whereby Adam gave names to things agreeable to their natures which caused the Fall, but an ambitious and authoritative desire of moral knowledge, to judge of good and evil, which makes men revolt from God.” Like Roger Bacon before him, Francis Bacon maintained that the biblical accounts of Noah, Moses, and Solomon, as well as the history of the useful arts, offered sufficient evidence for the belief that the restoration of mankind’s original powers was part of the divine plan.
Boyle
Boyle learned Hebrew and other ancient languages to read God’s words in their original expression. Likewise, as a natural scientist he delved directly into God’s work—in the original, as it were—in an equally devout effort to come closer to his Creator. “It is the glory and prerogative of man,” Boyle wrote, “that God was pleased to make him, not after the world’s image, but his own.” Thus, he urged, mankind must “look upon ourselves as belonging unto God.” Boyle believed that this privileged relationship to God was especially embodied in the scientist, “born the priest of nature,” whose “inquiry mediates between God and Creation.” And he was convinced that, because of their great learning and devotion, the scientific virtuosi would ultimately, in the millennium, “have a far greater knowledge of God’s wonderful universe than Adam himself could have had.”
Newton
Though Boyle no doubt identified himself, and was certainly viewed by his many acolytes, as the very model of the saintly virtuoso, the new transcendent ideal was more fully realized in the Godlike persona of Isaac Newton. Austere, ascetic, and aloof, Newton spent his entire life seeking some intimate understanding of his Creator. Like Boyle, Newton studied ancient languages better to understand the true meaning of Scripture. A fervent millenarian, he devoted a lifetime to the interpretation of prophecy, producing four separate commentaries on Daniel and Revelation.
In Joachimite fashion, he believed that he “could prove, point by point, that everything foretold in the prophetic books had actually taken place, that the correspondence between prophecy and recorded history had been perfect.” In a treatise on “The End of the World, Day of Judgement, and World to Come,” he speculated about what the millennium and the Kingdom of Heaven would be like, while he privately calculated the time of the second coming. Born on Christmas Day, he believed himself a messiah and a prophet (a status still accorded him by Seventh-Day Adventists) and wrote that the “Sons of the Resurrection” would have bodies like that of Christ, “with more than a touch of self-assurance that he would be among [them].”
Whereas Boyle began his career emphasizing the usefulness of experimental natural philosophy only to argue later that “patient study was likely to enable man to gain a far larger share of his patrimony than aiming at immediate usefulness,” Newton from the start “displayed sovereign indifference to the practical usages of science”; throughout his life his scientific efforts to discern the operating laws of nature were “directed almost exclusively to the knowledge of God.” Newton’s religious beliefs encouraged him “to search for divine efficacy in every aspect of the material order.” For Newton, then, to uncover the hidden logic of the universe was to understand, and in that sense identify with, the mind of its Creator.
To the deeply religious mind of modern science, beginning with Boyle and Newton (and also Galileo), the twin conceptions of “the divine transcendence of the creator-maker and the transcendence of man as knower reinforced each other.” Henceforth nature was to be understood by the way it was made, which required of the scientist a God-like posture and perspective. But divine knowledge of creation was not all. Some aimed even higher, seeking not merely to know creation as it was made but also to make it themselves, actually to participate in creation and hence know it firsthand.
In the sixteenth century, inventors and mechanics had increasingly invoked the image of God as craftsman and architect in order, by analogy, to lend prestige to their own activities: in their humble arts, they were imitating God and hence reflecting his glory. In the seventeenth century, the scientists began to carry this artisanal analogy between the works of man and God somewhat further, toward a real identity between them. Again, as Milton had written, they strove to know God not just in order to love and imitate him, but also “to be like him.”
God was not only creator but re-creator, correcting creation for its corruption by man. In the Joachimite millenarian scheme, man became through history a participant in his own redemption, and hence in the reconstruction of creation; through his mortal efforts, God completed his work.
NASA
Through the end of the Apollo and Skylab programs, 90 percent of the men chosen to be astronauts had been active Christians, and of these 85 percent belonged to Protestant denominations.
Noble recounts the stories of several NASA engineers who profess a faith in God, noting that atheistic astronauts were the exception.
Descartes
Like that of Kepler, René Descartes’s dream long inspired much reflection and anticipation but had to wait three centuries for its fulfillment. Like Kepler, Descartes perceived the mind as mankind’s heavenly endowment and, in its essence, distinct from the body, the burden of mortality. “The first thing one can know with certainty,” Descartes wrote in a letter, is that man “is a being or substance which is not at all corporeal, whose nature is solely to think.” The body, on the other hand, reflected mankind’s “epistemological fallenness” rather than its divinity, and stood “opposed to reason.”
Impediments to pure thought, the body’s senses and passions deceive and disturb the intellect. “The body is always a hindrance to the mind in its thinking,” Descartes argued, which is “contradicted by the many preconceptions of our senses.” In the wake of Copernicus and Galileo, Descartes was keenly aware that mere sense-perception could not provide a true scientific understanding of the universe and might indeed retard such understanding.
Of the mind, Descartes wrote: “If it were taken out of the prison of the body, it would find [these ideas] within itself.” He thus proposed a new regime for the intellect, a set of “rules for the mind,” designed to cleanse it of bodily impurity and make way for the clear and distinct ideas which humans shared with God. (Like many of his contemporaries, such as Bacon, Comenius, Wilkins, and Glanvill, Descartes dreamed also of a universal language based upon such precise concepts—a restoration of the prelapsarian, pre-Babel language of Adam—which would help overcome the confusion and conflict engendered by miscommunication.)
Boole
In 1833, at the age of seventeen, the mathematician George Boole had what he described as a “mystical” experience. “The thought flashed upon him suddenly one afternoon as he was walking across a field [that] his main ambition in life was to explain the logic of human thought and to delve analytically into the spiritual aspects of man’s nature [through] the expression of logical relations in symbolic or algebraic form.”
An intensely religious man (like Newton, an Anglican with Unitarian tendencies), Boole had originally intended to join the clergy, but the death of his father compelled him instead to seek employment as a teacher. Like Descartes, Boole believed that human thought was mankind’s link with the divine and that a mathematical description of human mental processes was therefore at the same time a revelation of the mind of God. “We are not to regard Truth as the mere creature of the human intellect,” he argued. “The great results of science, and the primal truths of religion and of morals, have an existence quite independent of our faculties and of our recognition.… It is given to us to discover Truth—we are permitted to comprehend it; but its sole origin is in the will or the character of the Creator, and this is the real connecting link between science and religion.”
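Boole’s ambition to capture “logical relations in symbolic or algebraic form” can be made concrete with a small sketch (my illustration, not from Noble’s text): treat truth values as the numbers 0 and 1, so that the logical connectives become ordinary arithmetic restricted to those two values.

```python
# Boolean logic as algebra over {0, 1} - a sketch of Boole's idea,
# not code from the book. AND is multiplication, NOT is subtraction
# from 1, and OR is built so the result stays within {0, 1}.

def AND(x, y):
    return x * y          # "x and y" as a product

def OR(x, y):
    return x + y - x * y  # inclusive or, never exceeds 1

def NOT(x):
    return 1 - x

# Boole's "law of duality": x * x = x for every truth value,
# a law with no analogue in ordinary arithmetic.
for x in (0, 1):
    assert AND(x, x) == x

# Classical laws of thought fall out as algebraic identities,
# e.g. De Morgan's law: not(x and y) == (not x) or (not y).
for x in (0, 1):
    for y in (0, 1):
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
```

Checking a law of logic thus reduces to checking an equation over two numbers – precisely the reduction Boole saw as a window onto the structure of thought itself.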
In Cartesian terms, the development of a thinking machine was aimed at rescuing the immortal mind from its mortal prison. At first the effort to design a thinking machine was aimed at merely replicating human thought. But almost at once sights were raised, with the hope of mechanically surpassing human thought by creating a “super intelligence,” beyond human capabilities. Then the prospect of an immortal mind able to teach itself new tricks gave rise to the vision of a new artificial species which would supersede Homo sapiens altogether. Totally freed from the human body, the human person, and the human species, the immortal mind could evolve independently into ever higher forms of artificial life, reunited at last with its origin, the mind of God.
Turing
Among the first persons to imagine the possibility of such a thinking machine were the American electrical engineer Claude Shannon and the English mathematician Alan Turing, who together developed the theoretical basis for both the design of electronic computers and the subsequent development of Artificial Intelligence.
The operation of the so-called Turing machine was based upon the establishment of a precise relationship between the binary arithmetic of the “machine” and a higher-level symbolic notation, which could be used to simulate thought—an analogy, that is, between the states of the machine and the states of mind.
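The correspondence Noble describes – between mechanical “states of the machine” and symbolic “states of mind” – can be sketched with a toy example (my illustration, not from the book): a rule table, indexed by state and tape symbol, that drives purely mechanical read/write/move operations. The machine below is a hypothetical one that increments a binary number written on the tape.

```python
# A minimal Turing machine sketch. The "mental" side is the rule
# table of symbolic states; the "mechanical" side is the tape of
# cells being read, overwritten, and scanned left or right.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1})."""
    cells = dict(enumerate(tape))  # sparse tape, indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    positions = sorted(cells)
    return "".join(cells[i] for i in range(positions[0], positions[-1] + 1)).strip(blank)

# Hypothetical example: increment a binary number. Scan right to the
# end of the number, then carry leftward, turning 1s into 0s until a
# 0 (or a blank) absorbs the carry.
increment = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),  # 1 plus carry -> 0, keep carrying
    ("carry", "0"): ("halt",  "1",  0),  # 0 absorbs the carry
    ("carry", "_"): ("halt",  "1",  0),  # overflow into a new leading digit
}

print(run_turing_machine(increment, "1011"))  # 1011 (eleven) -> "1100" (twelve)
```

Everything the machine “knows” lives in the rule table; the hardware merely reads, writes, and moves – which is exactly the analogy between machine states and states of mind that Turing exploited.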
“One may hope that this process will be more expeditious than evolution,” Turing wrote. “The survival of the fittest is a slow method of measuring advantages. The experimenter, by the exercise of intelligence [in machine design], should be able to speed it up.
Turing mockingly dismissed concern about irreverently usurping divine powers or denigrating the crown of creation. Yet his ironic rejoinders reflect the persistence of deep-seated cultural preoccupations. In designing such machines, as in conceiving children, Turing observed, “we are … instruments of His will providing mansions for the souls He creates.” “Consolation would be more appropriate” in response to those fearful of jeopardizing mankind’s privileged position, he wrote:
“perhaps this should be sought in the transmigration of souls” – the transfer, that is, of the souls of men to their machines. Shortly before he apparently took his own life by eating a cyanide-laced apple, Turing sent four last postcards – “Messages from the Unseen World,” he called them – to a friend, which contained cryptic references to a perhaps abiding faith, despite his fashionable atheism.
The first card was lost. On the second he wrote, “The Universe is the Interior of the Light Cone of the Creation,” referring to the cosmological theories of Einstein. “Science is a Differential Equation, Religion is a Boundary Condition,” he wrote on the third. On the last, the message in verse was more extended and evocative of ancient belief:
“Hyperboloids of wondrous Light
Rolling for aye through Space and Time
Harbour those waves which somehow might
Play out God’s holy pantomime.”
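The aphorism on the third postcard can be unpacked with a schoolbook example (my gloss, not Noble’s): a differential equation fixes only a general family of solutions, and boundary conditions single out the one that is actually realized. The equation $y'' + y = 0$ admits every function of the form $y(x) = A\cos x + B\sin x$; imposing the boundary conditions $y(0) = 0$ and $y(\pi/2) = 1$ forces $A = 0$ and $B = 1$, leaving the unique solution $y(x) = \sin x$. On this reading, science supplies the law governing all possible worlds, while religion supplies the conditions that select the actual one.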
According to the official creation myth of the Artificial Intelligentsia, AI as a self-conscious technological project was launched in 1956. After they had programmed a digital computer to express symbols in SAGE simulations and chess-playing, Newell and Simon, together with J. C. Shaw, formulated their radically reductionist notion of “information processing systems,” and, on that theoretical basis, proceeded laboriously to write programs for their computer which would simulate human thought. (Linguistic theorist Umberto Eco has suggested that AI computer languages are “heirs of the ancient search for the perfect language,” the pre-Babel universal language of Adam.)
“The basic point of view inhabiting our work has been that the programmed computer and human problem solver are both species belonging to the genus information processing system,” Newell and Simon wrote. “The vagueness that has plagued the theory of higher mental processes and other parts of psychology disappears when the phenomena are described as programs.”
In this spirit, they developed their Logic Theorist program, designed to prove automatically theorems taken from Russell and Whitehead’s Principia Mathematica, often described as the first actual demonstration of Artificial Intelligence. The first machine proof of a theorem was accomplished in the summer of 1956, after which Simon excitedly wrote to Bertrand Russell about it. Russell replied sardonically: “I am delighted to know that Principia Mathematica can now be done by machinery. I wish Whitehead and I had known of this possibility before we both wasted ten years doing it by hand.… I am delighted by your example of the superiority of your machine to Whitehead and me.”
DARPA
Minsky became the premier promoter of Artificial Intelligence. His intentionally provocative denigrations of human mental anatomy and ability gained him widespread notoriety, as did his extravagant exaggerations of AI advances. Beyond his propagandistic motives, his pronouncements showed a deep disdain for mere mortality and an impatience for something more. Minsky described the human brain as nothing more than a “meat machine” and regarded the body, that “bloody mess of organic matter,” as a “teleoperator for the brain.”
Both, he insisted, were eminently replaceable by machinery. What is important about life, Minsky argued, is “mind,” which he defined in terms of “structure and subroutines”—that is, programming. Like Descartes, he insisted that the mind could and should be divorced from both the body and the self. “The important thing in refining one’s own thought,” Minsky maintained, “is to try to depersonalize your interior.”
In the short term, Minsky prophesied at the Dartmouth conference, man-machine symbiosis would become the major manifestation of Artificial Intelligence, long before the advent of truly autonomous thinking machines capable of evolutionary advance. Time-sharing computers, he argued, will enable us “to match human beings in real time with really large machines,” rendering the machines practical “thinking aids.”
“In the years to come, we expect that these man-machine systems will share, and perhaps for a time be dominant, in our advance toward the development of Artificial Intelligence.”
Accordingly, the U.S. Air Force sought to use high-speed computers to “amplify” or “accelerate” human cognitive processes, in order to bring pilots “up to speed” and thereby ensure optimal use of their high-performance aircraft; the F-14 jet fighter, for example, required split-second pilot responses to a rapid, continuous flow of computer-generated information. The human component of the weapons system thus had to be fitted for “real time interactivity” through the computer-based “augmentation of human intellect.” Air Force research into human-machine symbiosis, the so-called pilot-associate project, included studies of voice-actuated computers, computers that respond to the pilot’s eye movements, the control of computers by brain waves (known as “controlling by thinking”), and even the direct “hard-wiring” of pilots to computers.
The religious rapture of cyberspace was perhaps best conveyed by Michael Benedikt, president of Mental Tech, Inc., a software-design company in Austin, Texas. Editor of an influential anthology on cyberspace, Benedikt argued that cyberspace is the electronic equivalent of the imagined spiritual realms of religion. The “almost irrational enthusiasm” for virtual reality, he observed, fulfills the need “to dwell empowered or enlightened on other, mythic, planes.”
Religions are fueled by the “resentment we feel for our bodies’ cloddishness, limitations, and final treachery, their mortality. Reality is death. If only we could, we would wander the earth and never leave home; we would enjoy triumphs without risks and eat of the Tree and not be punished, consort daily with angels, enter heaven now and not die.”
The link between virtual reality and the religious rejection of the corporeal body is interesting. There is a sense in which the internet has allowed mankind to transcend its physical and even mental limitations. With the internet, geographical location is irrelevant; in a sense, so is memory. Even time is compressed, since the search for information becomes so much more efficient. And there is undeniably a feeling of bodily transcendence, a loss of identification with time, when one is engaged in cyberspace, through gaming or a million other things.
“Doesn’t the materialist view of the mind contradict the existence of an immortal soul?” Daniel Crevier asked, and insisted that both the Old and New Testaments imply “that Judeo-Christian tradition is not inconsistent with … bodily resurrection in the afterlife. It is certain that some kind of support would be required for the information and organization that constitutes our minds,” Crevier acknowledged, a material, mechanical replacement for the mortal body. But “religious beliefs, and particularly the belief in survival after death, are not incompatible with the idea that the mind emerges from physical phenomena.” Christ was resurrected in a new body; why not a machine?
Crevier recounted the discussions of such a possibility that began to surface on the AI grapevine in the 1980s, in particular the idea of “downloading” the mind into a machine, the transfer of the human mind to an “artificial neural net” through the “eventual replacement of brain cells by electronic circuits and identical input-output functions.” “This (so far) imaginary process strongly suggests the possibility of transferring a mind from one support to another,” and hence the survival of the “soul” after death in a new, more durable, medium. “This gradual transition from carnal existence to embodiment into electronic hardware would guarantee the continuity of an individual’s subjective experience” beyond death.
Moravec
The chief prophet of such “postbiological” computer-based immortality was Hans Moravec, a Stanford-trained AI specialist who joined the faculty at Carnegie Mellon and developed advanced robots for the military and NASA. In 1988, his visionary Mind Children described in detail how humans would pass their divine mental inheritance on to their mechanical offspring.
Moravec lamented the fact that the immortal mind was tethered to a mortal body, and that “the uneasy truce between mind and body breaks down completely as life ends [when] too many hard-earned aspects of our mental existence simply die with us.” But, he exclaimed, “it is easy to imagine human thought freed from bondage to a mortal body.”
Moravec described the surgical procedure involved in such a transfer, which entailed linking the neural bundles of the brain to cables connected to the computer. (Crevier found his description “convincing.”) “In time, as your original brain faded away with age, the computer would smoothly assume the lost functions. Ultimately your brain would die, and your mind would find itself entirely in the computer.… With enough widely dispersed copies, your permanent death would be highly unlikely.” (The same replication procedure would also make possible resurrection, since “the ability to transplant minds will make it easy to bring to life anyone who has been carefully recorded on a storage medium.”)
Thus, in Moravec’s view, the advent of intelligent machines will provide humanity with “personal immortality by mind transplant,” a sure “defense against the wanton loss of knowledge and function that is the worst aspect of personal death.”
Among the members of the AI community, such longings are common. “We’re a symbiotic relationship between two essentially different kinds of things,” observed Danny Hillis, an AI disciple of Marvin Minsky at MIT, designer of the Connection Machine, a parallel-processing supercomputer, and cofounder and CEO of Thinking Machines, Inc. “We’re the metabolic thing, which is the monkey that walks around, and we’re the intelligent thing, which is a set of ideas and culture. And those two things have coevolved together, because they helped each other. But they’re fundamentally different things. What’s valuable about us, what’s good about humans, is the idea thing. It’s not the animal thing.”
Gene Therapy
The English physicist Crick (together with Rosalind Franklin and Maurice Wilkins of London’s King’s College and some of his own colleagues from the Cavendish Laboratory at Cambridge) brought the technology of X-ray crystallography to bear upon “the heart of a profound insight into the nature of life itself.” In eighteen months they had discerned the double-helix structure of DNA and fathomed the physical mechanism of inheritance. “We have found the secret of life,” Crick exclaimed. (“And now the announcement of Watson and Crick about DNA,” said Salvador Dalí, in words later used by Crick as the epigraph for his Of Molecules and Man. “This is for me the real proof of the existence of God.”)
In an understated way, Watson concurred: “DNA molecules, once synthesized, are very, very stable,” he noted. “The idea of the genes’ being immortal smelled right.” “The double helix has replaced the cross in the biological analphabet,” said pioneer nucleic-acid chemist Erwin Chargaff. In time, as Dorothy Nelkin observed, this view of DNA as an eternal and hence sacred life-defining substance—indeed, the new material basis for the immortality and resurrection of the soul—became a modern article of faith. DNA spelled God, and the scientists’ knowledge of DNA was a mark of their divinity.
Once the new science had given rise to an adequate technology, biology, the study of life, would increasingly become at the same time biotechnology, the engineering of life. By 1970, having outlined the genetically controlled processes of the cell in single-celled organisms like bacteria, the molecular biologists began to move on to the larger and infinitely more complex cells of higher organisms. They learned how to isolate such cells, taken from embryos, for investigation.
After an intense lobbying effort by Walter Gilbert, James Watson, Charles Cantor, Leroy Hood, and other leading figures in genetic engineering, together with lobbyists from pharmaceutical and biotechnology companies, the Human Genome Project was established by the U.S. government. It went into formal operation in 1990, under the direction of Watson. Estimated eventually to cost three billion dollars, the project was the largest engineering undertaking since NASA’s Apollo Project. In 1993, in a dispute over the patenting of human genes (which Watson opposed), Watson resigned and was succeeded by University of Michigan medical geneticist Francis Collins, a Gilbert protégé, who had contributed significantly to the identification of the genes for cystic fibrosis, neurofibromatosis, and Huntington’s disease.
“There is a greater degree of urgency among older scientists than among younger ones to do the human genome now,” wrote Watson. “The younger scientists can work on their grants until they are bored and still get the genome before they die. But to me it is crucial that we get the human genome now rather than twenty years from now, because I might be dead then and I don’t want to miss out on learning how life works.” “This is truly the golden age of biology,” Leroy Hood declared. “I believe that we will learn more about human development and pathology in the next twenty-five years than we have in the past two thousand.” Walter Gilbert said that in this mighty undertaking he beheld a “vision of the grail.” And in the same spirit that had sent legendary medieval knights in search of the most coveted and mysterious prize of Christendom, project director Francis Collins pronounced the unprecedented effort “the most important and the most significant project that humankind has ever mounted.”