Network Thinking (Complexity: A Guided Tour)

An article in Science magazine discussed how the behavior of ant colonies can be viewed as “computer algorithms,” with each ant running a simple program that enables the colony to perform complex tasks such as deciding when and where to move the nest. There is no central leader: each ant operates autonomously, basing its decisions on interactions with a few nearby ants. The result is a kind of computation very different from that of a traditional computer with a central processing unit and random-access memory.

Similarly, a 1994 article by brain researchers questioned whether the brain is a computer, concluding that if we adopt a broader concept of computation, the answer is yes. The brain, like ant colonies, computes through billions of neurons working in parallel without central control.

In the previous chapters, we explored life and evolution in computers. Here, we examine how computation occurs in nature. Generally, computation in nature involves a complex system processing information to succeed or adapt in its environment. To study this, scientists use simplified models like cellular automata.

Cellular Automata

Turing machines formalize computation as transforming input on a tape via set rules to produce output. Modern computers, designed with von Neumann architecture, consist of random access memory (RAM) and a central processing unit (CPU) that executes instructions stored as strings of 1s and 0s.

A cellular automaton (CA) is a grid of cells, each following a rule based on the states of neighboring cells. For instance, imagine a grid of lightbulbs that turn on or off depending on the state of their eight neighbors. The grid updates synchronously, with each bulb following a rule like “turn on if the majority of neighbors are on.”
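As a minimal sketch (my own illustration, not code from the book), the “majority vote” lightbulb rule might look like this in Python, assuming a square grid that wraps around at its edges and that a 4–4 tie leaves a cell unchanged:

    import random

    def step(grid):
        """One synchronous update: a cell turns on if most of its eight
        neighbors are on, off if most are off, and keeps its state on a tie.
        (Wrapping edges and tie-breaking are assumptions, not from the book.)"""
        n = len(grid)
        new_grid = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                on = sum(grid[(i + di) % n][(j + dj) % n]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if not (di == 0 and dj == 0))
                if on > 4:
                    new_grid[i][j] = 1
                elif on < 4:
                    new_grid[i][j] = 0
                else:
                    new_grid[i][j] = grid[i][j]
        return new_grid

    # Example: one update of a random 20 x 20 grid of "lightbulbs."
    grid = [[random.randint(0, 1) for _ in range(20)] for _ in range(20)]
    grid = step(grid)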

The variety of possible rules for cellular automata is vast. Each cell update rule determines the next state based on the current states in its neighborhood. These simple components, lacking central control, can exhibit complex, unpredictable behavior.

The Game of Life

John Conway’s “Game of Life,” a cellular automaton invented in 1970, uses a simple two-state system in which cells are either alive (on) or dead (off). Conway’s rules determine each cell’s next state in terms of birth, survival, death by loneliness (too few live neighbors), and death by overcrowding (too many). Despite its simplicity, Life can simulate a universal computer: it is capable of performing any computation a standard computer can, though very inefficiently.
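Conway’s specific rules are standard: a dead cell with exactly three live neighbors is born; a live cell with two or three live neighbors survives; every other cell dies (or stays dead). A minimal sketch of one update, assuming for simplicity a finite grid that wraps around at its edges:

    def life_step(grid):
        """One synchronous update of Conway's Game of Life.
        (The wrapping grid is a simplifying assumption; Life is usually
        defined on an unbounded grid.)"""
        n = len(grid)
        nxt = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                live = sum(grid[(i + di) % n][(j + dj) % n]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0))
                if grid[i][j] == 1:
                    nxt[i][j] = 1 if live in (2, 3) else 0   # survival vs. death
                else:
                    nxt[i][j] = 1 if live == 3 else 0        # birth
        return nxt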

The Four Classes

Physicist Stephen Wolfram studied the behavior of cellular automata, categorizing them into four classes based on the patterns they produce:

  1. Class 1: Uniform final patterns.
  2. Class 2: Cycling between a few patterns.
  3. Class 3: Random-looking behavior.
  4. Class 4: Complex interactions and localized structures.

Wolfram proposed that class 4 rules are capable of universal computation. His assistant Matthew Cook proved that Rule 110, a simple elementary cellular automaton, is universal.
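For elementary cellular automata such as Rule 110, the rule number itself encodes the update table: the eight possible three-cell neighborhoods index the eight bits of the number. A minimal sketch of running such a rule (an illustration of the encoding, not Cook’s proof), assuming a circular lattice:

    def run_elementary_ca(rule_number, initial, steps):
        """Run a one-dimensional, two-state, nearest-neighbor CA.
        Bit k of rule_number gives the next state for the neighborhood
        whose three cells (left, center, right) spell out k in binary."""
        rule = [(rule_number >> k) & 1 for k in range(8)]
        row, n = list(initial), len(initial)
        history = [row]
        for _ in range(steps):
            row = [rule[(row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]]
                   for i in range(n)]
            history.append(row)
        return history

    # Example: Rule 110 started from a single "on" cell.
    rows = run_elementary_ca(110, [0] * 40 + [1] + [0] * 40, 40)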

Wolfram’s “New Kind of Science”

In his 2002 book, A New Kind of Science, Wolfram presented the Principle of Computational Equivalence, suggesting:

  1. Natural processes are computations.
  2. Many natural systems can support universal computation.
  3. Universal computation is an upper limit on the sophistication of computations in nature.
  4. The computations carried out by different natural processes are almost always equivalent in sophistication.

Wolfram speculated that the universe operates on simple cellular automaton-like rules, possibly describable in a few lines of code. His book, despite mixed reviews, brought significant attention to cellular automata research.

Wolfram’s theories remain controversial. The idea that all computations in nature are equivalent in sophistication is debatable, as is the notion of a single rule governing the universe. Nonetheless, his work highlights the potential of simple models to explain complex natural phenomena.

In 1989, I read an article by physicist Norman Packard about using genetic algorithms to design cellular automaton rules, which fascinated me. Although other commitments delayed my work on this idea, I finally had the opportunity to explore it with the help of Peter Hraber and Rajarshi Das at the Santa Fe Institute.

We aimed to evolve cellular automaton rules for the “majority classification” task, in which the automaton must determine whether its initial configuration contains a majority of on cells or a majority of off cells. Unlike von Neumann-style computers, cellular automata have no central processor and no random-access memory, which makes even this seemingly simple task challenging.

Using a genetic algorithm, we evolved rules for a one-dimensional cellular automaton with each cell connected to three neighbors on either side. Initially, we considered a simple “local majority vote” rule, which proved ineffective. We then encoded cellular automaton rules as bit strings and used the genetic algorithm to evolve solutions over multiple generations.
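With a neighborhood of seven cells (the cell plus three on each side), a rule is a lookup table with 2^7 = 128 entries, so it can be written as a 128-bit string, which is exactly the kind of genome a genetic algorithm can mutate and recombine. The sketch below (my own illustration of the encoding, not our original code) builds the “local majority vote” rule as such a table and applies it to a circular lattice:

    import random
    from itertools import product

    RADIUS = 3
    NEIGHBORHOOD = 2 * RADIUS + 1        # 7 cells: the cell plus 3 on each side

    # The "local majority vote" rule as a 128-bit table:
    # output 1 if the 7-cell neighborhood contains more 1s than 0s.
    majority_rule = [1 if sum(bits) > RADIUS else 0
                     for bits in product((0, 1), repeat=NEIGHBORHOOD)]

    def ca_step(config, rule_table):
        """One synchronous update of a 1-D CA on a circular lattice."""
        n = len(config)
        def index(i):
            # Read the 7 neighborhood bits as a binary number into the table.
            bits = [config[(i + d) % n] for d in range(-RADIUS, RADIUS + 1)]
            return int("".join(map(str, bits)), 2)
        return [rule_table[index(i)] for i in range(n)]

    # One update of a random configuration (odd length, so a strict
    # majority always exists; the lattice size here is illustrative).
    config = [random.randint(0, 1) for _ in range(59)]
    config = ca_step(config, majority_rule)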

The evolved rules, although not immediately understandable at the genomic level, demonstrated correct behavior through complex patterns. Physicist Jim Crutchfield’s conceptual tools helped us interpret these patterns as information-processing structures. We identified “particles” representing boundaries between regions, which carried information and facilitated the computation.

This particle-based analysis explained how the cellular automaton processed information and performed the majority classification task. The genetic algorithm had evolved a rule that could be understood in terms of these information-processing particles.

This approach to understanding computation in decentralized systems could extend to other contexts, including brain computation and the behavior of plant stomata networks, where distributed, particle-like interactions may play a role in information processing.

Computation in Nature

Szilard’s insight into the connection between information and the second law of thermodynamics has significantly impacted science, elevating information to a fundamental component of reality alongside mass and energy. In biology, living systems are often described as information-processing networks. However, the concept of information processing remains somewhat ambiguous outside the formal context of Turing machines and von Neumann-style computers.

This chapter explores the concept of information processing in natural systems, focusing on the immune system, ant colonies, and cellular metabolism. The goal is to identify common principles of information processing in these decentralized systems.

What is Information Processing?

In natural systems, computation is the processing of information to adapt to or succeed in the environment. To understand this, we need to answer:

  • What constitutes information in the system?
  • How is it communicated and processed?
  • How does this information acquire meaning?

Information Processing in Traditional Computers

Turing formalized computation in the 1930s with the Turing machine, the idealization underlying von Neumann-style computers. In a Turing machine:

  • Information is represented by tape symbols and states of the tape head.
  • It is communicated and processed through reading, writing, and state changes by the tape head, following the program’s rules.
  • The meaning of information comes from human interpretation.

Information Processing in Cellular Automata

For cellular automata, the answers are less clear. For instance, in a cellular automaton evolved to perform majority classification:

  • Information is in the lattice’s state configurations at each time step.
  • It is communicated and processed via neighborhood cell interactions following the automaton’s rule.
  • Meaning is derived from the human interpretation of the task being performed.

High-level descriptions like particles and their interactions can help understand information processing in cellular automata.

Information Processing in Living Systems

The chapter explores information processing in the immune system, ant colonies, and cellular metabolism to identify common principles.

The Immune System

The immune system processes information to protect the body from pathogens. Key components include lymphocytes, which have receptors that bind to specific antigens. The system employs randomness to create diverse receptors and uses feedback mechanisms to determine whether an immune response is warranted.

Ant Colonies

Ant colonies perform complex tasks like foraging and task allocation without central control. Foraging ants leave pheromone trails, which other ants follow based on pheromone concentration. Task allocation involves ants switching tasks based on encounters with other ants performing different tasks.
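A hedged sketch of the trail-following idea, in the spirit of ant-colony models rather than anything quoted from the book: an ant at a junction chooses among branches with probability roughly proportional to the pheromone already deposited on each, with a small constant term so that unmarked branches still get explored. The parameter name and value below are my own illustrative choices.

    import random

    def choose_branch(pheromone_levels, exploration=0.1):
        """Pick a branch with probability proportional to its pheromone level,
        plus a small constant so unmarked branches are still sampled.
        (Illustrative mechanism and parameters, not from the book.)"""
        weights = [level + exploration for level in pheromone_levels]
        return random.choices(range(len(weights)), weights=weights)[0]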

Cellular Metabolism

Metabolism involves chemical processes within cells, controlled by metabolic pathways and feedback mechanisms. For instance, glycolysis adjusts based on ATP levels, demonstrating how pathways self-regulate to meet the cell’s needs.

Principles of Information Processing in Decentralized Systems

Key principles include:

Communication via Sampling

Information is communicated through spatial and temporal sampling by individual components, like lymphocytes sampling antigens or ants sampling pheromones.

Random Components of Behavior

Randomness allows components to explore a vast space of possibilities. For example, lymphocytes have randomly generated receptors, and ant foragers move randomly to find food.

Fine-Grained Exploration

Complex systems benefit from fine-grained architecture, allowing simultaneous exploration of many possibilities. This parallel terraced scan enables systems to adapt dynamically based on ongoing feedback.

Interplay of Unfocused and Focused Processes

Adaptive systems balance random explorations with focused actions based on perceived needs. For example, the immune system continuously generates new lymphocytes while focusing on those that effectively bind antigens.

How Does Information Acquire Meaning?

Meaning in living systems is tied to survival and natural selection. Information gains meaning through its impact on an organism’s fitness, guiding responses that enhance well-being or reproductive success. This understanding can inspire artificial systems, such as artificial immune systems and ant colony optimization algorithms, to solve real-world problems.

Easy Things Are Hard

Recently, I asked my eight-year-old son, Jake, to put on his socks. He humorously placed them on his head, illustrating a fundamental difference between humans and computers. Despite the inherent ambiguities of human language, we usually understand each other because we are sensitive to context. For instance, when I ask my husband if he knows where my keys are and he simply replies “yes,” I get annoyed, because what I wanted was for him to tell me their location.

Modern computers lack this contextual sensitivity. Spam filters, for example, can miss messages that are obviously spam to a human reader, and web pages often have to be written with carefully tailored titles and keywords just so search engines can find them. Yet computers excel in certain narrow domains, such as driving vehicles across rugged terrain, diagnosing diseases, solving complex equations, and playing chess.

Despite these advancements, computers still struggle with tasks requiring human-level understanding, such as natural language processing and common sense reasoning. Marvin Minsky aptly described this paradox as “Easy things are hard.” Computers can perform tasks considered highly intelligent by humans but falter at simple tasks easily handled by a child.

Making Analogies

One critical capability missing in computers is the ability to make analogies. Analogy-making involves perceiving abstract similarities between different things despite superficial differences. This skill is fundamental to human intelligence.

Examples of Human Analogies:

  • A child recognizes that dogs in picture books, photos, and real life are instances of the same concept.
  • A person identifies the letter “A” in various typefaces and handwriting styles.
  • Someone understands that saying “I call my parents once a week” implies calling their own parents, not someone else’s.

Other examples include describing Perrier as “the Cadillac of bottled waters” or likening the war in Iraq to “another Vietnam.” Humans are adept at perceiving such abstract similarities and making connections, a skill that computers notoriously lack.

My Own Route to Analogy

In the early 1980s, while working as a high-school math teacher, I read Douglas Hofstadter’s “Gödel, Escher, Bach: an Eternal Golden Braid,” which profoundly influenced me. The book introduced me to the idea that thinking and consciousness emerge from the interactions of many simple neurons, in a way analogous to the behavior of cells, ant colonies, and the immune system. Inspired by the book, I set out to study artificial intelligence (AI) under Hofstadter’s guidance.

Despite initial difficulties, I eventually connected with Hofstadter, who put me to work on a computer program, “Copycat,” designed to make analogies in a world of letter strings. The project aimed to emulate the mechanisms responsible for human analogy-making.

Simplifying Analogy

Hofstadter simplified the problem of analogy-making by creating a microworld of letter strings. For example, if “abc” changes to “abd,” what is the analogous change to “ijk”? The typical human response is “ijl,” based on the rule “Replace the rightmost letter by its alphabetic successor.”

This microworld allows for various conceptual slippages, such as changing “iijjkk” to “iijjll” by recognizing groups of letters rather than individual letters.
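The “literal” version of the typical rule is easy to state in code; the hard part, which Copycat addresses, is deciding which rule captures the essence of the change. A minimal sketch of the literal rule (my own illustration, not Copycat code), ignoring the question of what to do after “z”:

    def replace_rightmost_with_successor(s):
        """Apply 'replace the rightmost letter by its alphabetic successor'
        to a letter string, e.g. 'ijk' -> 'ijl'."""
        successor = chr(ord(s[-1]) + 1)      # naive: no wrap-around past 'z'
        return s[:-1] + successor

    print(replace_rightmost_with_successor("ijk"))      # ijl
    print(replace_rightmost_with_successor("iijjkk"))   # iijjkl, not iijjll

Note that applying this literal rule to “iijjkk” gives “iijjkl,” not the preferred “iijjll”; producing the latter requires the conceptual slippage from letters to groups of letters described above.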

Being a Copycat

To develop “Copycat,” I worked on mechanisms that enable the program to make human-like analogies. Copycat’s task involves building perceptual structures (descriptions, links, groupings, and correspondences) on top of unprocessed letter strings. These structures represent the program’s understanding of the problem and allow it to formulate a solution.

Key Components of Copycat:

  • Slipnet: A network of concepts with dynamic activation values, representing their relevance to the problem.
  • Workspace: The area where letters of the analogy problem reside and where perceptual structures are built.
  • Codelets: Agents that explore possibilities for perceptual structures and attempt to instantiate them.
  • Temperature: Measures the system’s organization level, influencing the randomness of codelet decisions.
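Temperature’s role can be illustrated with a hedged sketch (my own formula, not Copycat’s actual mechanism): at high temperature, a codelet’s choice among competing options is nearly random; as temperature falls, the choice increasingly favors the strongest option.

    import random

    def temperature_weighted_choice(options, temperature):
        """Choose among (name, strength) options.  High temperature flattens
        the probabilities toward uniform; low temperature sharpens them toward
        the strongest option.  (Illustrative formula only.)"""
        weights = [strength ** (1.0 / max(temperature, 0.01))
                   for _, strength in options]
        names = [name for name, _ in options]
        return random.choices(names, weights=weights)[0]

    # At temperature 100 the weaker option is picked often; near 0, almost never.
    options = [("strong structure", 0.9), ("weak structure", 0.3)]
    print(temperature_weighted_choice(options, temperature=100.0))
    print(temperature_weighted_choice(options, temperature=0.05))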

A Run of Copycat

During a run, Copycat starts with a high temperature (indicating disorganization) and progressively builds perceptual structures. As these structures form, the temperature decreases, leading to more deterministic decisions. The program’s task is to construct a coherent perception of the problem, transitioning from a random, parallel processing mode to a focused, serial processing mode.

Copycat illustrates how a system can gradually organize its perception of a situation, balancing exploration and exploitation. This approach mimics biological systems, such as ant colonies and the immune system, which also balance random exploration with focused actions based on feedback.

The ultimate goal of AI is to enable computers to perceive meaning independently, a challenge known as the “barrier of meaning.” While Copycat demonstrates a primitive form of meaning, achieving true human-like understanding remains a distant goal. However, analogy-making is likely to be a crucial component in overcoming this barrier.

Complex Systems and Computer Modeling

Complex systems, due to their intricate nature, are often difficult to understand. Traditional mathematically oriented sciences like physics, chemistry, and mathematical biology have historically focused on studying simpler, idealized systems that are more manageable through mathematics. However, with the advent of fast, inexpensive computers, it has become feasible to construct and experiment with models of systems too complex to be grasped through mathematics alone. The pioneers of computer science, including Alan Turing, John von Neumann, and Norbert Wiener, were driven by the desire to use computers to simulate systems that develop, think, learn, and evolve, thus giving rise to a new way of doing science. The traditional division of science into theory and experiment has now been complemented by a new category: computer simulation.

What Is a Model?

In science, a model is a simplified representation of some “real” phenomenon. While scientists aim to study nature, much of their work involves constructing and studying models of nature. For instance, Newton’s law of gravity—a mathematical model—states that the force of gravity between two objects is proportional to the product of their masses divided by the square of the distance between them. This is a mathematical representation of a particular phenomenon.
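In symbols, the standard form of the law, with F the force, G the gravitational constant, m_1 and m_2 the two masses, and r the distance between the objects:

    F = G \frac{m_1 m_2}{r^2}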

Models can also describe how a phenomenon works in terms of simpler concepts, which we call mechanisms. For example, Newton’s law of gravity was initially criticized because it lacked a mechanism to explain gravitational force. Later, Einstein proposed a different mechanistic model in his theory of general relativity, conceptualizing gravity as the effect of material bodies on the shape of four-dimensional space-time.

Models help our minds make sense of observed phenomena by relating them to familiar concepts and are also used to predict future outcomes. For example, Newton’s law of gravity is still used to predict planetary orbits, while Einstein’s general relativity has been used to predict deviations from those orbits.

Idea Models

While computers are often used to run detailed and complex models for applications like weather forecasting or designing automobiles, complex systems research frequently explores idea models. These relatively simple models aim to gain insights into general concepts without making detailed predictions about specific systems. Here are some examples of idea models discussed in this book:

  • Maxwell’s demon: An idea model for exploring the concept of entropy.
  • Turing machine: An idea model for formally defining “definite procedure” and exploring the concept of computation.
  • Logistic model and logistic map: Minimal models for predicting population growth, later used to explore concepts of dynamics and chaos.
  • Von Neumann’s self-reproducing automaton: An idea model for exploring the “logic” of self-reproduction.
  • Genetic algorithm: An idea model for exploring the concept of adaptation, sometimes used as a minimal model of Darwinian evolution.
  • Cellular automaton: An idea model for complex systems in general.
  • Koch curve: An idea model for exploring fractal-like structures.
  • Copycat: An idea model for human analogy-making.

Idea models serve various purposes: they explore general mechanisms underlying complex phenomena, show the plausibility or implausibility of proposed mechanisms, explore the effects of variations on a simple model, and act as “intuition pumps” to prime one’s understanding of complex phenomena. They have also inspired new technologies and computing methods.

Modeling the Evolution of Cooperation

Many biologists and social scientists have used idea models to explore the evolution of cooperation in populations of self-interested individuals. Despite the expectation that evolution would favor selfishness, cooperation is observed at various levels in biological and social realms. For instance, single-celled organisms once cooperated to form multicellular organisms, and ant colonies evolved complex social structures with collective goals.

The Prisoner’s Dilemma

The Prisoner’s Dilemma, invented in the 1950s by game theorists Merrill Flood and Melvin Dresher, is a classic model used to investigate cooperation. It involves two individuals (Alice and Bob) who must each decide whether to cooperate with or betray the other without knowing the other’s decision. The dilemma illustrates that while mutual cooperation yields the better outcome for both, rational self-interest leads each to betray the other, leaving both worse off.
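A hedged sketch of the payoff structure, with illustrative numbers of my own choosing (higher is better); any payoffs in which the temptation to defect exceeds the reward for mutual cooperation, which exceeds the punishment for mutual defection, which exceeds the sucker’s payoff, produce the same dilemma:

    # Payoffs to (Alice, Bob) for each pair of moves; "C" = cooperate, "D" = defect.
    PAYOFFS = {
        ("C", "C"): (3, 3),   # mutual cooperation (reward)
        ("C", "D"): (0, 5),   # Alice is betrayed (sucker), Bob is tempted
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),   # mutual defection (punishment)
    }

    # Whatever Bob does, Alice scores more by defecting (5 > 3 and 1 > 0),
    # yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).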

Computer Simulations of the Prisoner’s Dilemma

Robert Axelrod conducted tournaments where computer programs played repeated rounds of the Prisoner’s Dilemma against each other. The simplest strategy, TIT FOR TAT—cooperate on the first move and then mimic the opponent’s last move—proved to be the most successful. This strategy’s success highlighted the importance of niceness, forgiveness, retaliation, and predictability in fostering cooperation.
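TIT FOR TAT is simple enough to state in a couple of lines. The sketch below (my own illustration, reusing the illustrative payoff values above) plays it against an always-defect strategy for a few rounds:

    PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
               ("D", "C"): (5, 0), ("D", "D"): (1, 1)}   # same illustrative values

    def tit_for_tat(my_history, opponent_history):
        """Cooperate on the first move, then copy the opponent's last move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(my_history, opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=5):
        history_a, history_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a = strategy_a(history_a, history_b)
            move_b = strategy_b(history_b, history_a)
            payoff_a, payoff_b = PAYOFFS[(move_a, move_b)]
            score_a, score_b = score_a + payoff_a, score_b + payoff_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))   # (4, 9): loses only the first round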

Extensions of the Prisoner’s Dilemma

Researchers have explored various extensions of the Prisoner’s Dilemma:

  • Adding Social Norms: Axelrod experimented with social norms in which players can be punished for defecting. Metanorms, under which players who fail to punish defectors are themselves punished, proved effective in sustaining cooperation.
  • Adding Spatial Structure: Martin Nowak and Robert May added spatial structure, finding that cooperation can persist indefinitely when players interact with neighbors on a lattice.
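A hedged sketch of the spatial idea, simplified from Nowak and May’s setup (the payoff values and wrapping grid are my own assumptions): players sit on a grid, each plays the one-shot game with its eight neighbors, and each then adopts the strategy of whichever cell in its neighborhood, itself included, scored highest.

    def spatial_pd_step(grid, temptation=1.9):
        """One generation of a spatial Prisoner's Dilemma on a wrapping grid.
        Cells hold "C" or "D".  Simplified payoffs: C vs C -> 1 each,
        D exploiting C -> temptation, everything else -> 0.
        (A sketch under these assumptions, not the original implementation.)"""
        n = len(grid)
        neighbors = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                     if (di, dj) != (0, 0)]

        def payoff(me, other):
            if me == "C" and other == "C":
                return 1.0
            if me == "D" and other == "C":
                return temptation
            return 0.0

        # Each cell's score is the sum of its games against its eight neighbors.
        scores = [[sum(payoff(grid[i][j], grid[(i + di) % n][(j + dj) % n])
                       for di, dj in neighbors)
                   for j in range(n)] for i in range(n)]

        # Each cell then copies the strategy of the highest-scoring cell
        # among itself and its neighbors.
        new_grid = []
        for i in range(n):
            row = []
            for j in range(n):
                best_i, best_j = i, j
                for di, dj in neighbors:
                    ni, nj = (i + di) % n, (j + dj) % n
                    if scores[ni][nj] > scores[best_i][best_j]:
                        best_i, best_j = ni, nj
                row.append(grid[best_i][best_j])
            new_grid.append(row)
        return new_grid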

Prospects of Modeling

Computer simulations of idea models like the Prisoner’s Dilemma are valuable additions to experimental science and mathematical theory. These models are crucial when actual experiments are infeasible, and the math is too complex. They help us understand phenomena like the evolution of cooperation and inspire new technologies and mathematical theories.

Computer Modeling Caveats

While models can be highly useful, they have limitations. The replication of results by independent groups is essential to confirm their reliability. Simplified models may have hidden unrealistic assumptions, and their results should be carefully scrutinized. Ultimately, the art of model-building involves excluding irrelevant parts of the problem while ensuring that the model’s insights remain relevant and applicable.

In summary, computer simulations have transformed how we study complex systems, allowing us to explore and understand phenomena that were previously beyond our grasp. They have complemented traditional scientific methods and opened new avenues for research and technological innovation.

References:

Mitchell, Melanie. Complexity: A Guided Tour. Oxford University Press, 2009.

"A gilded No is more satisfactory than a dry yes" - Gracian