The ChatGPT Debate

The rise of large language models (LLMs), exemplified by ChatGPT, has sparked a fervent debate over the capabilities and constraints of artificial intelligence. Proponents of LLMs argue that they represent a significant step forward in our ability to comprehend language and the world around us. However, opponents argue that LLMs are shackled by their inability to replicate human intelligence and moral reasoning.

At the center of this discussion is the question of whether LLMs can truly grasp the intricacies of language and context in the same way that humans do. Critics contend that although LLMs may be able to perform certain tasks at a superhuman level, like data analysis or language translation, they lack the underlying knowledge and experience that humans possess. This deficiency results in their inability to reason and think critically like humans.

The New York Times article “Noam Chomsky: The False Promise of ChatGPT” delves into the limitations of LLMs such as GPT-3, particularly their incapacity to replicate human intelligence and moral reasoning. The article argues that while LLMs may be adept at specific tasks, they still lack the crucial ability to reason and think critically like humans, owing to their insufficient understanding of the world.

Critics further contend that although LLMs can generate coherent sentences and responses, they lack the profound understanding and intuition that humans bring to interpreting and analyzing language. Consequently, LLMs may struggle to comprehend and respond to the intricacies of human communication, such as sarcasm, irony, or metaphor.

Additionally, opponents argue that LLMs are unable to engage in moral reasoning in the same way that humans can. Although LLMs can identify patterns and make predictions based on data, they are incapable of the kind of ethical reasoning that humans engage in. This is because human moral reasoning is deeply rooted in our understanding of the world and our experiences, which are difficult, if not impossible, to replicate in a machine. In other words, to truly understand the shared human experience, one must be embodied in the world, just as human beings are. The kind of data found in text captures only a very specific type of understanding that does not come close to the full range of human experience.

Nevertheless, proponents of LLMs contend that these models represent a significant advancement in our ability to process language and comprehend the world. LLMs can process vast amounts of data and identify patterns and relationships that humans may not be able to discern. This enables LLMs to perform tasks at a superhuman level, such as language translation or data analysis. LLMs can also summarize large amounts of text and supply context that would take humans far longer to assemble. Perhaps most importantly, LLMs can assist in solving problems. Whether you’re struggling to estimate a project’s cost, gauge its duration, identify its steps, or figure out how to complete it, LLMs can assist you, because this type of information is contained within text. You don’t need uniquely human experiences to complete tasks such as taxes or legal work.

GPT’s ability to process large amounts of legal documents, statutes, and cases is a potential game-changer for the legal field. It can digest complex legal language and present it in an easily understandable format. The natural language processing capabilities of GPT can also be leveraged to generate legal briefs, contracts, and other legal documents. In essence, GPT can become a virtual legal assistant that can handle tedious tasks and free up time for attorneys to focus on more complex legal work.
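
To make this concrete, here is a minimal sketch of how such a summarization task might be scripted. Everything here is an illustrative assumption rather than a reference implementation: it presumes the `openai` Python package (pre-1.0 interface), an API key in the environment, and an invented sample clause.

```python
# Minimal sketch: asking a chat model to explain a legal clause in plain English.
# Assumes the pre-1.0 `openai` package and OPENAI_API_KEY set in the environment;
# the model name, prompts, and clause are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

clause = (
    "The Lessee shall indemnify and hold harmless the Lessor from any and all "
    "claims, damages, and liabilities arising out of the Lessee's use of the premises."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You explain legal text in plain English."},
        {"role": "user", "content": f"Summarize this clause for a layperson:\n{clause}"},
    ],
)

print(response.choices[0].message.content)
```

The point is less the specific calls than the workflow: the tedious first pass over dense legal language becomes a few lines of code, with the attorney reviewing rather than drafting.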

Proponents of LLMs argue that they can be trained to understand the nuances of language and context. As LLMs are trained on ever-increasing amounts of data, they can learn to recognize and respond to subtleties of human communication, such as sarcasm or irony. While LLMs may not be able to tell a joke yet, they are getting better at understanding human communication.

Despite these advantages, there is still a significant issue of bias in LLMs, which has been a major concern in the field of artificial intelligence. While LLMs may generate technically correct responses, they may also perpetuate and amplify biases present in the data they are trained on, producing responses that are discriminatory or that reinforce existing prejudices. However, with proper rules and training, LLMs can be made to identify and filter out biased output, potentially making them less biased and more objective than humans at certain tasks.
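
To make the idea of “rules” concrete, here is a toy sketch of one such guardrail: a post-generation filter that rejects outputs containing flagged terms and asks the model to try again. The term list and retry strategy are purely illustrative assumptions; real bias mitigation involves curated training data, fine-tuning, and human review.

```python
# Toy sketch of a rule-based output filter. The flagged-term list and the
# regenerate-on-failure strategy are illustrative assumptions only.
FLAGGED_TERMS = {"placeholder_slur", "placeholder_stereotype"}  # not a real lexicon

def violates_rules(text: str) -> bool:
    """Return True if the generated text contains any flagged term."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

def generate_checked(generate, prompt: str, max_retries: int = 3) -> str:
    """Call `generate(prompt)` until an output passes the rule check."""
    for _ in range(max_retries):
        candidate = generate(prompt)
        if not violates_rules(candidate):
            return candidate
    return "I can't provide a response to that."  # safe fallback

# Usage with any text-generation callable:
# reply = generate_checked(my_llm_call, "Describe a typical engineer.")
```

A simple filter like this obviously catches only surface-level problems; the deeper biases live in the training data itself, which is the next point.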

At the end of the day, LLMs are only as good as the data they are fed and the rules they are trained on. Bad rules or bad data = bad output.

Another key issue in the debate over ChatGPT and LLMs is the idea of improbable explanations. While LLMs may be able to process large amounts of data and make predictions based on patterns, they are not designed to challenge the prevailing paradigm or to come up with new and innovative ideas. This is where human intelligence still has an edge over LLMs. Humans have the ability to make connections between seemingly unrelated concepts and to conceive of improbable explanations that can lead to breakthroughs. In fact, many creative breakthroughs are unexpected, improbable, and often neither explainable nor repeatable. There is a genuine mystery to how humans have arrived at creative solutions over time, and there is no known way for LLMs to do the same.

So, the danger of relying too heavily on LLMs is that we may lose this ability to conceive of improbable explanations. We may become so reliant on these models to provide us with answers that we forget the importance of challenging the prevailing paradigm and thinking outside of the box. This could lead to a stagnation of knowledge and a lack of progress in fields where improbable explanations are necessary.

Imagine, for example, that people read fewer books because of these models, yet those books contain strands of thought that could be improbable sources of enlightenment or inspiration. There is a kind of magic to that process, and if we rely on LLMs for information or knowledge, we may cut off that magic, and with it, all the potential for truly creative solutions.

You might wonder whether creativity is simply the offspring of drudgery or really hard work, and you might be right to some extent. But a closer inspection of history shows that many of the most significant breakthroughs in science, technology, and the arts were more haphazard than planned, and more improbable than predictable.

Think of the discovery of penicillin, the creation of the printing press, or the composition of Beethoven’s Ninth Symphony. Each of these accomplishments relied on a combination of serendipitous circumstances, intuitive leaps, and hard work. While LLMs can produce impressive outputs based on existing patterns and data, they lack the ability to make random associations, explore tangential ideas, or generate new insights based on personal experiences or emotions. Therefore, while LLMs may improve productivity and efficiency, they cannot replace the creative ingenuity that has driven human progress for centuries.

To illustrate this point, let us take the example of climate change. The prevailing paradigm in the scientific community is that human activities are contributing to global warming, which will have disastrous consequences if left unchecked. However, there are still many who reject this explanation, either because they do not believe that human activities are responsible for climate change or because they do not believe that the consequences will be as severe as predicted.

If we rely solely on LLMs to provide us with answers to the issue of climate change, we may miss out on important insights and alternative explanations that challenge the prevailing paradigm. For example, there may be other factors at play, such as natural climate cycles, that are not adequately captured by the models.

Furthermore, even within the realm of human activities as a cause of climate change, there may be improbable explanations that are worth considering. For instance, some researchers have suggested that geoengineering, or the manipulation of the Earth’s environment on a large scale, could be a potential solution to climate change. While this idea may seem far-fetched, it highlights the importance of considering all possible solutions, no matter how unlikely they may seem.

Common sense and intuition

Humans can use common sense and intuition to make decisions in situations where there is no clear answer, while LLMs rely solely on data and algorithms. Thinking is not merely a conscious activity, but involves unconscious processes.

The human psyche is a vast and complex landscape, filled with many layers of consciousness and unconsciousness. Thinking, as an integral component of the psyche, is no different. While we may be aware of some of our thought processes, much of our thinking occurs beneath the surface, hidden in the depths of the unconscious mind.

To understand why thinking involves unconscious processes, we must first recognize the true nature of the unconscious. It is not simply a repository of repressed desires and traumas, but rather a vital source of creative energy and intuition. The unconscious mind is the wellspring from which our thoughts and ideas originate, bubbling up to the surface of our awareness when the conditions are right. Without the unconscious, our thinking would be limited and sterile, lacking the depth and richness that comes from tapping into the hidden depths of our psyche.

Thus, thinking involves unconscious processes because the unconscious is the source of our most creative and innovative thoughts. To tap into this wellspring, we must be open to the mysterious and unpredictable workings of the unconscious, allowing it to guide us towards new and unexpected insights. By embracing the unconscious and recognizing its importance in the thinking process, we can unleash our full potential and access a realm of thought that is beyond our conscious awareness.

There is ample evidence from cognitive psychology and neuroscience to support the idea that thinking involves unconscious processing.

One example of such evidence comes from the field of priming, which shows that exposure to a stimulus can affect subsequent behavior or thinking even when the person is not consciously aware of the stimulus. For example, participants who were briefly exposed to the word “yellow” were faster to identify a picture of a banana than those who were exposed to an unrelated word, even though they were not consciously aware of the word “yellow.” This suggests that the unconscious mind processes information and affects behavior without our conscious awareness. (See “The Unconscious Mind” by John F. Kihlstrom and Terrence M. O’Brien, American Psychologist, 1995: https://www.jstor.org/stable/1699849)

Another line of evidence comes from brain imaging studies, which have shown that many cognitive processes involve activity in brain regions outside of conscious awareness. For example, studies have shown that the brain areas involved in decision-making and problem-solving are active before a person becomes consciously aware of their decision or solution. This suggests that these processes are happening unconsciously and are only brought to conscious awareness after the fact.

One study demonstrating unconscious decision-making is Soon et al. (2008), published in Nature Neuroscience under the title “Unconscious determinants of free decisions in the human brain.”

In this study, participants were asked to make a simple decision about pressing a button with their left or right hand while their brain activity was recorded. The researchers found that they could predict which hand the participant would choose up to 10 seconds before the participant was consciously aware of the decision. This suggests that the brain was already making the decision unconsciously before the participant became aware of it. (https://www.nature.com/articles/nn.2112)
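
For readers curious about the method, the core technique in such studies is pattern classification: a classifier is trained to predict the upcoming choice from activity recorded seconds earlier, and above-chance accuracy means the “pre-decision” signal carries information about the choice. The toy sketch below reproduces only that statistical idea on synthetic data; the feature counts and signal strength are invented for illustration.

```python
# Toy sketch of the pattern-classification idea behind Soon et al. (2008),
# using synthetic data: voxel-like features carry a weak signal about an
# upcoming left/right choice, and a classifier decodes it above chance.
# All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

choices = rng.integers(0, 2, size=n_trials)        # 0 = left, 1 = right
weights = rng.normal(size=n_voxels)                # per-voxel choice signal
activity = 0.3 * np.outer(choices - 0.5, weights)  # weak signal...
activity += rng.normal(size=(n_trials, n_voxels))  # ...buried in noise

# Cross-validated accuracy above 0.5 means the simulated "pre-decision"
# activity predicts the choice before it is consciously reported.
scores = cross_val_score(LogisticRegression(max_iter=1000), activity, choices, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```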

Judgment and decision-making

Humans can make judgments and decisions based on a wide range of factors, including personal values and moral principles, while LLMs can only make decisions based on their programming and data input.

Moral principles are superior to intelligence, and this can be demonstrated with ease.

Firstly, let us consider the simple fact that intelligence, while a noble attribute, can be put to all sorts of nefarious purposes. A cunning and intelligent thief, for example, may use their mental prowess to steal from the innocent and vulnerable. Yet, a person with a strong moral compass would never stoop to such despicable behavior, regardless of their level of intelligence.

Furthermore, intelligence is a cold and analytical quality, whereas moral principles are rooted in warmth and compassion. It is all too easy for an intelligent person to become detached and clinical in their approach to life, whereas a person with strong moral principles is always guided by their heart and their conscience.

But perhaps the most compelling argument for the superiority of moral principles is this: they are timeless and unchanging, while intelligence is fleeting and fallible. A person may be a genius in their own time, but their intelligence may be rendered obsolete by new discoveries or advancements in technology. Moral principles, however, are eternal and unchanging, guiding us through the ages and providing a moral compass for humanity to follow.

Just one example of an accomplished scientist who was amoral is Fritz Haber. Haber was a German chemist who is credited with the invention of the Haber-Bosch process, which is a method for synthesizing ammonia from nitrogen and hydrogen gases. This process is used in the production of fertilizer and has been critical in allowing modern agriculture to support the world’s population. However, Haber was also involved in the development of chemical weapons during World War I and was responsible for the deaths of thousands of soldiers. Despite this, he continued to pursue his scientific research and was awarded the Nobel Prize in Chemistry in 1918 for his work on nitrogen fixation.

Pushback Against AI

Peterson

In a recent tweet, Jordan Peterson shared a screenshot from a Twitter user who had asked ChatGPT to generate a poem for Joe Biden and then for Donald Trump. The resulting poems were markedly different in length, with the Biden poem being significantly longer and more detailed than the Trump poem. Peterson expressed outrage at what he perceived as clear bias on the part of the AI, suggesting that this was an example of the left-leaning tech industry manipulating language to further their political agenda. While some users argued that the difference in length may be due to the fact that the model had more material to work with for the Biden poem, Peterson maintained that this was evidence of a larger pattern of bias in the tech industry. This sparked a debate on social media about the extent to which AI models can be biased and the ethical implications of this phenomenon.
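
Whatever one makes of Peterson’s conclusion, the underlying test, prompting the same model with matched requests and comparing the outputs, is easy to sketch. Below is a minimal, hypothetical version of such a probe; the model name and prompts are assumptions, and a serious audit would use many paired prompts, repeated samples, and statistical controls rather than a single comparison.

```python
# Minimal sketch of a differential-prompting probe: send matched requests
# to the same model and compare response lengths. Model name and prompts
# are illustrative; a real audit needs many pairs and repeated samples.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def poem_word_count(subject: str) -> int:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Write a poem admiring {subject}."}],
    )
    return len(response.choices[0].message.content.split())

for subject in ("Joe Biden", "Donald Trump"):
    print(subject, "->", poem_word_count(subject), "words")
```

Length alone is, of course, a crude proxy for bias, which is one reason the screenshot settled so little.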

Chomsky

In the Chomsky piece mentioned at the beginning of this article, Chomsky and his coauthors wrote about the shortcomings of AI: amorality, lack of context, and the absence of true originality. These are valid arguments that are difficult to dispute at the moment. However, the piece quickly drew a strong response from the tech community, with many experts pushing back against Chomsky’s less convincing claims.

Below is a summary of a Twitter thread by Raphael Milliere (@raphaelmilliere) responding to Chomsky’s New York Times opinion piece. Here is an outline of the thread’s main points and the specific arguments each responds to:

  1. Chomsky’s article misrepresents the capabilities of large language models (LLMs): Milliere argues that Chomsky’s article presents a distorted view of what LLMs can actually do. He points out that LLMs are not simply “pattern detectors” that lack any real understanding of language, as Chomsky suggests. Rather, LLMs have demonstrated impressive performance on a wide range of tasks, including natural language processing, language translation, and even creative writing.
  2. LLMs are not inherently biased or dangerous: Chomsky expresses concern in his article about the potential biases and dangers of LLMs. Milliere acknowledges that these are legitimate concerns, but argues that they are not inherent to LLMs themselves. Rather, biases can be introduced at various stages of the model’s development and use, and can be addressed through careful design and training.
  3. LLMs can be used for positive social impact: Despite the concerns raised about LLMs, Milliere argues that they have the potential to be used for positive social impact. For example, they can be used to develop tools for language translation, text summarization, and other tasks that can help break down language barriers and improve access to information.
  4. Chomsky’s views are not representative of the broader AI community: Milliere suggests that Chomsky’s views on LLMs are not shared by many other experts in the field of artificial intelligence. He cites examples of researchers who have published papers on the impressive performance of LLMs, and notes that major tech companies are investing heavily in LLM research and development.

Weinstein

Bret Weinstein wrote: “It’s stunning to me that there isn’t even a question about whether to unleash Chat-GPT on the world. The threat it poses can’t be phrased precisely, and even if it could, it can’t be proven. We are rolling the dice—with everything our species has so far achieved—we’re going all in.”

He continues:

“The obvious rejoinder: the game theory of collective action forces our hand. If ‘we’ don’t deploy it, ‘they’ will. But that’s a cop out. We never seriously discussed the possibility of stopping it. Instead it was ‘Hey, check this out!!!’ And off we went, across the event horizon…”

Weinstein’s darkly comical tweet captures much of the sentiment around AI. This is supposedly one of the most powerful technologies humans have ever developed, and within the span of a few months it has been rolled out at lightning speed, with very little consideration of the consequences.

Musk

Then we come to Elon Musk, who has warned for years about the dangers of AI. In fact, it’s one of the most-used selling points for his company Neuralink, which ultimately aims to create human cyborgs to counteract some apocalyptic future scenario in which AI goes rogue.

On March 15, Musk posted a link to this article: https://arstechnica.com/tech-policy/2023/03/amid-bing-chat-controversy-microsoft-cut-an-ai-ethics-team-report-says/

In short, Microsoft had reportedly laid off its AI ethics team.

Conclusion

If I had to paint a very crude (but surprisingly accurate) picture of the landscape right now, it would be divided primarily into two camps: those so excited about LLMs and AI that they can hardly contain themselves, and those so terrified that they are already preparing for the end of civilization. You would be hard-pressed to find many people in the middle, at least on places like Twitter.

But let’s look at some of the arguments being given. Is AI biased? Undoubtedly. Is this a problem? Yes. Should it be resolved? Immediately.

Can we stop or slow down all these developments, as Weinstein would have us consider? No. Unfortunately, his follow-up tweet is correct. There is no plausible outcome in which no one develops AI. The advantages it affords whichever country or entity builds it ensure that it will be developed, regardless of the risks. The risk of self-destruction is extremely high, but it will be taken when the alternative is “destruction by the other.”

In The Technological Society, Jacques Ellul hammers home the point that technique, the totality of rational methods aimed at maximum efficiency, is an autonomous force that impels humans to develop it. It doesn’t depend on deliberate human action; it’s simply a matter of competition. If you’re in a contest with someone who’s good at something, you’re both going to try to gain an advantage over each other. And what’s the best way to do that? Maximize efficiency. And the only way to do that is through the perfection of technique. Whether the incentives are military, economic, artistic, or athletic, the employment of technique is inevitable.

Now, I know what you’re thinking: “But wait, isn’t technology supposed to make our lives easier? Isn’t it supposed to be at our service?” Sure, in theory. But in practice, we become slaves to it. We rely on it so heavily that we can’t function without it. And this is precisely why Ellul’s message is so important: we need to be aware of the fact that our desire for progress and efficiency will always drive us to develop technology further. We can’t stop it, and we probably shouldn’t try to. But we do need to be mindful of its impact on our lives, our values, and our future.

Let me give you an example. Look at social media. It started off as a way to connect with friends and family, and now it’s a multi-billion dollar industry that’s been linked to depression, anxiety, and even suicide. We didn’t intend for it to become like this, but it did. And it’s not just social media – it’s every aspect of our lives that’s been touched by technology. It’s our jobs, our education, our entertainment, our healthcare. We can’t escape it.

So, what can we do? We can start by acknowledging that technology is not inherently good or bad. It’s neutral. It’s up to us to decide how we use it, and what values we prioritize. We need to ask ourselves: is this technology serving us, or are we serving it? And if it’s the latter, we need to reassess our priorities. We need to think about the kind of future we want to create, and what role technology should play in it.

The objections to ChatGPT and AI are valid, and Ellul’s message is a sobering one. It reminds us that we’re not in control of technology; it’s in control of us. But that doesn’t mean we’re powerless. We can still make choices, and we can still shape our future. We just need to be mindful of the forces that are driving us forward, and make sure they’re aligned with our values and aspirations. Because in the end, it’s not technology that matters, it’s us.

The most important thing is that this technology be democratized rather than placed in the hands of a few people. Here is where OpenAI stands on this:

OpenAI was founded with a mission to ensure that the benefits of AI are broadly and evenly distributed to all of humanity. This is a laudable goal given the potential of AI to transform society in profound ways, both positively and negatively. By democratizing access to this technology, OpenAI hopes to prevent a scenario in which AI is only controlled by a small, elite group, which could lead to a concentration of power and resources in the hands of a few, and ultimately stifle innovation and progress. As the company states in its documentation: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

To achieve this goal, OpenAI has taken a number of steps, such as making many of its research findings and code publicly available, partnering with organizations in various sectors to develop and deploy AI solutions that are beneficial to society, and advocating for policies and regulations that promote the responsible development and use of AI. While challenges and risks remain, such as the potential for unintended consequences and misuse of AI, the commitment of OpenAI and its partners to ensuring that this technology is used for the common good is a positive step in the right direction.

We can only hope they’re being honest.

In the end, the debate over ChatGPT and LLMs reveals a fundamental tension between progress and caution. On the one hand, LLMs represent a significant leap forward in our ability to process language and make sense of the world around us. They can greatly empower people in developing countries with little access to basic healthcare or education, and provide valuable tools for translation, data analysis, and even creative writing. They can also assist in disaster response efforts, help solve complex environmental challenges, and even aid in space exploration. The possibilities are endless.

Indeed, the democratization of such technologies could be transformative for many societies, bringing greater fairness and prosperity to millions of people. However, we must also be mindful of the risks that come with the development of artificial intelligence. Unchecked, these technologies could lead to serious abuses of power and further widen existing disparities. As such, it is essential that we continue to regulate and monitor these advancements to ensure that they are used for the betterment of humanity, rather than for its destruction.

"A gilded No is more satisfactory than a dry yes" - Gracian