Over the last two years, generative AI has smashed our ideas of what intelligence means, what AI can and cannot do, and where we stand in the cosmos. A two-thousand-year-old journey from Aristotle to today has culminated in a moment where human exceptionalism has finally been challenged. The barriers that people have seen as standing between AI and genuine thought keep falling away, and with them come new attempts to redefine AI.
In a remarkable 2017 essay, “How Aristotle Created the Computer”, Chris Dixon argued that:
The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.
Aristotle, after all, was the father of logic, and logic deals in truth claims, which is why Boolean logic’s values are called “true” and “false”. Claude Shannon would later discover that “true” and “false”, represented by 1 and 0, could live inside a circuit that was either open or closed:
Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)
Millennia after Aristotle’s invention of logic, the computer was born: logic took physical form. The story of the computer, of the thinking machine, has largely been told as one of shifting computing tasks from human beings to machines. Indeed, that is why computers are called computers: inherent in the name is the assumption that computers are calculating aids.
In fact, John McCarthy, the man who coined the term “artificial intelligence”, observed that its definition has changed as machines have learned to do new things. When chess legend Garry Kasparov was beaten by Deep Blue, that was the signature achievement of AI; yet today, nobody quite thinks of playing against or with Stockfish or Komodo as having anything to do with “AI”. It’s simply “computing”. If there was a consensus, it was that AI would be confined to computational tasks. That left enormous scope for human beings to thrive in a world of AI. It also left some researchers, such as Douglas Hofstadter, the author of the iconic “Gödel, Escher, Bach”, convinced that AI as it was being pursued was nothing more than glorified curve fitting, and that it could never come close to thinking in a meaningful sense. His essay “The Shallowness of Google Translate” is perhaps the best example of this kind of thinking. In it, he argued that, while AI translation tools could perform rudimentary translation tasks, they could not replicate the nuanced and creative understanding of language that belongs to the human translator:
“It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things”.
He was right: AI translation tools do not think, imagine, remember, or understand, and they do not know that words stand for things. That, however, was beside the point. What Hofstadter said of AI translation tools is true also of generative AI. As Stephen Wolfram explained in his essay “What Is ChatGPT Doing … and Why Does It Work?”, ChatGPT’s success is, to put it crudely, simply a matter of statistical prediction. And yet ChatGPT is truly creative. In fact, you could argue that its ability to hallucinate says a lot about its ability to be creative. The result has been that people have either sought to redefine the boundaries of truly human thought, or suggested that perhaps what we have long thought special is, in fact, rather mundane.
The first tendency says: okay, so AI can do all this stuff, but it’s still terrible at empathy, terrible at causal thinking, and even its creativity relies on human prompts. This argument falls short for me, because people have always erected barriers to what they think AI can do, and AI keeps scaling those barriers. Indeed, Hofstadter himself has gone from pointing out such barriers to saying,
“It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”
If generative AI can create, whether it’s telling lies, making ads, writing poetry, painting a landscape, or even satisfying adult sexual desires, it is participating in a fundamental part of what makes us human. That it does these things at a basic level is beside the point: there are human beings who are bad at these things too. Generative AI has forced everyone to reimagine what it means to be human. In doing so, we are forced to ask what this means for the future of humanity. When Google’s AlphaGo defeated the greatest Go player in the world, Ke Jie, he called its moves “godlike”. There may come a time when we read a novel, watch a movie, or listen to a piece of music that strikes us as similarly godlike. With all of humanity’s wisdom available to it, AI is already so good in certain areas that it’s difficult to claim human beings have a good grasp of what it’s doing.
That reality makes the second common reaction to generative AI seem comical: perhaps writing essays and making art are not as hard as we have always imagined? We imagined that doing these things was difficult, that AI could never come close, and now we are desperately trying to erect more imaginary barriers to what AI can do. Perhaps this is the end of human exceptionalism.