The Limits of AI: Separating Computer Science Fact from Computer Science Fiction
Are you scared of what artificial intelligence (AI) will do in the future? Will AI become so smart it takes over the world? Will we someday be ruled by a super-intelligent computer? Will AI make humans obsolete? What about killer robots? How can a roaming gang of highly weaponized rogue killer robots be contained? What about a swarm of self-aware, autonomous lethal drones? These and other fears of artificial intelligence are popularized in movies like The Terminator and The Matrix and streaming series like Netflix’s creepy Black Mirror. But understanding true artificial intelligence requires separating computer science fact from computer science fiction.
In some ways AI is not that intelligent, and it never will be. Computer science places hard boundaries on what AI can and cannot do. If AI is ever used for evil, either a human will be at the controls, or the harm will be the result of an accident caused by careless programming or unforeseen circumstances.
Before any topic is discussed, though, terms need defining. The definition of AI varies according to your dictionary. Those in academia tease apart such disciplines as artificial intelligence, machine intelligence, and computational intelligence. In the media, however, artificial intelligence can refer to almost anything a computer does that elicits a response of “Wow!” This “wow” definition will suffice for our purpose, since the limits determining what AI can and can’t do are largely defined by the limitations of computers.
Separating the Computable from the Noncomputable
AI is impressive in that it can do many things faster or better than a human can. Examples include playing chess and recognizing faces. (Unlike me, AI never forgets a face.) On the other hand, humans will forever have capabilities beyond the reach of AI. These include creativity, sentience, understanding, and spirituality. Those afraid of AI do not seem to be aware of its performance boundaries. These limitations do not mean AI will never be dangerous. Like electricity or thermonuclear energy, AI poses dangers that need mitigating.
What are these limitations? Computers can obviously only do things that are computable. Fundamentally, anything computable must follow a step-by-step procedure. These procedures are called algorithms, and computers can only execute algorithms written in the form of computer code.
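To make "step-by-step procedure" concrete, consider one of the oldest algorithms on record, Euclid's two-thousand-year-old recipe for finding the greatest common divisor of two numbers, sketched here in Python:

```python
# Euclid's algorithm: a finite, unambiguous, step-by-step procedure.
def gcd(a: int, b: int) -> int:
    while b != 0:           # repeat one fixed rule...
        a, b = b, a % b     # ...replace (a, b) with (b, a mod b)
    return a                # ...until the stopping condition is met

print(gcd(48, 18))  # prints 6
```

Every step is mechanical and unambiguous, which is exactly what makes the task computable.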
Intuitively, writing computer code that duplicates the experience of human love or compassion seems impossible. But can the premise of human noncomputability be built on solid ground? Are there tasks that are provably noncomputable? Yes. This is established fact in computer science: many problems are provably nonalgorithmic and therefore beyond the capability of computers.
An example of a nonalgorithmic, and therefore noncomputable, task comes from Rice's theorem. Can properties of computer code be determined by an algorithm? In other words, can computer software, called an oracle, be written to determine what an arbitrary block of computer code will or won't do? In many important cases, the answer is no. Rice's theorem says, for instance, that no oracle can be written to examine an arbitrary computer program and determine whether the program will at some time print the number 3. For a particular program, the answer may be obvious: if the first line of the code says "PRINT 3," then of course the program will print 3. The key to Rice's theorem is that the oracle must work for all possible programs, and that is provably impossible.
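For readers who want to see why such an oracle is impossible, here is a minimal sketch of the standard self-reference argument in Python. The function names are illustrative, and the oracle's body is deliberately left blank, since Rice's theorem says it can never be written:

```python
def prints_three(program):
    """Hypothetical oracle: True if running program() would ever print 3."""
    ...  # provably impossible to implement for all possible programs

def spite():
    # Ask the oracle about ourselves, then do the opposite.
    if not prints_three(spite):
        print(3)
```

If the oracle answers that spite() will print 3, then spite() never prints it; if the oracle answers that it won't, spite() prints it. Either way the oracle is wrong about spite(), so no such oracle can exist.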
A special case of Rice’s theorem is the Turing halting problem: algorithmically determining whether an arbitrary computer program will stop or run forever. No oracle program can ever be written that determines whether another arbitrary program will run forever or run for a while and halt. The great Alan Turing proved in the 1930s that the halting problem is noncomputable. Portrayed in the movie The Imitation Game for his role in cracking the Nazis’ Enigma code in World War II, Turing is the father of modern computer science.
Another provably noncomputable task involves compression. Compression is a common computer operation, but given an arbitrary computer file, determining the smallest size to which it can be compressed is noncomputable. That minimum size is called the file’s Kolmogorov complexity, and above a certain file size it cannot be computed. Doing so is provably nonalgorithmic.
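A short sketch using Python's standard zlib compressor shows the asymmetry: a real compressor can prove that a file compresses at least this much, but no algorithm can determine, in general, how small the file could possibly get:

```python
import zlib

data = b"ab" * 2000                     # 4,000 bytes of pure repetition
smaller = zlib.compress(data, level=9)  # maximum compression effort
print(len(data), "->", len(smaller))    # e.g., 4000 -> a few dozen bytes

# zlib establishes only an UPPER BOUND on the file's Kolmogorov
# complexity. Computing the true minimum is provably nonalgorithmic.
```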
This brings us to the question of whether there are human attributes that are noncomputable. Experiential human love and compassion have already been mentioned. Three more noncomputable human attributes are creativity, sentience, and understanding.
The noncomputable attributes of humans bear directly on the question of digital immortality. Is it possible to upload yourself to a computer and make a digital copy of yourself? If so, you could live forever as your digital self. But uploading yourself assumes you can be totally duplicated by a sequence of ones and zeros. You would need to be totally computable to be uploaded in toto. Since you're not, only the computable part of you (the part that can add numbers, follow cooking recipes, and mow the lawn) could be uploaded. In terms of any personal interactions, an uploaded you would be boring, like a mechanical robot without any personality.
Nobel laureate Roger Penrose agrees. He says: “Intelligence cannot be present without understanding. No computer has any awareness of what it does.”
Penrose’s case is made using the mathematics of computers as pioneered by Turing. Gregory Chirikjian, former director of the Johns Hopkins robotics lab, agrees with Penrose: “[AI does not display human traits] nor will robots be able to exhibit any form of creativity or sentience.”
Satya Nadella, Microsoft CEO, in his book Hit Refresh, joins in. He writes: “One of the most coveted human skills is creativity, and this won’t change. Machines will enrich and augment our creativity, but the human drive to create will remain central.”
But how can Nadella claim that computers of the future will never be creative? Maybe the super-duper computers of tomorrow will overcome the limitations of today's best number crunchers. The answer to this objection is the Church-Turing thesis. Alan Turing proposed the general-purpose computer that today we call a Turing machine. Turing worked with Alonzo Church, who invented a computing language called the lambda calculus. Church's lambda calculus turned out to be able to do everything a Turing machine could do. In fact, a Turing machine can do anything a modern computer can do. It might take a billion times longer, but the equivalent capability is there. The claim that this equivalence extends to any conceivable computing machine is today called the Church-Turing thesis. Both the Turing machine and the computers of the future are restricted to executing algorithms in the form of computer programs. Any claim about noncomputability that applies to the computers of today and yesterday therefore also applies to the computers of the future.
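A Turing machine itself is astonishingly simple. Here is a minimal simulator sketched in Python, following the standard textbook formulation of a tape, a read/write head, and a table of rules; the bit-flipping machine at the end is an illustrative toy:

```python
def run_turing_machine(tape, rules, state="start"):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))  # the tape; missing cells are blank
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")                     # "_" = blank
        new_symbol, move, state = rules[(state, symbol)]  # look up rule
        cells[head] = new_symbol                          # write symbol
        head += 1 if move == "R" else -1                  # move the head
    return "".join(cells[i] for i in sorted(cells))

# A toy machine that flips every bit on the tape, then halts.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", flip))  # prints 1001_
```

By the Church-Turing thesis, anything your laptop or any future supercomputer can compute, this humble rule-table machine can also compute, given enough tape and time.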
Qualia: Your Sensory Experiences
Qualia are also noncomputable. Qualia are experiences of the senses, such as taste, smell, and touch. Let’s do a thought experiment. You bite into a peeled orange segment. As you bite, the skin on the orange segment bursts, and the juice covers your tongue. You taste sweet orange flavor as you chew and swallow. The deliciousness of the smell and taste entices you to take another bite. The smell and taste of “orange” are examples of qualia.
You are now assigned the job of explaining your orange-eating experience to a man who has had no sense of taste or smell since birth. How can this be done? The man has no idea what anything tastes or smells like.
You can try to explain how an orange smells and tastes. The man can be presented with the chemical components of the orange segment. He can understand the physics of chewing and the biology of the taste buds. But it is not possible to communicate to the sense-deprived man the true experience of biting down and tasting the juice that bursts from the orange segment's vesicles. The qualia of smell and taste are beyond description to those without a shared experience.
If the experience of biting an orange segment cannot be described to a man without the senses of taste and smell, how can a computer programmer expect to duplicate qualia experiences in a computer using computer code? If the true experience can’t be explained, it is nonalgorithmic and therefore noncomputable. There are devices called artificial noses and tongues that detect chemicals like the molecules associated with the orange. But these sensors experience no pleasure when the molecules are detected. They experience no qualia. Duplicating qualia is beyond the capability of AI.
Understanding
Understanding is also beyond the capability of computers and thus of AI. Philosopher John Searle, with his illustration of “the Chinese room,” showed how computers do not understand what they do. His argument, over forty years old, still stands today.
Searle imagined himself in a room with lots of file cabinets. A question written in Chinese is slipped through a slot in the door. Searle does not read Chinese, but the file cabinets contain billions of easily searchable questions, along with their answers, written in Chinese. He searches through the file cabinets until he finds a sheet of paper with writing that exactly matches that of the question being posed. The paper also contains the answer to the question. He copies the answer in Chinese, returns to the door, and slips the answer back through the slot.
From the outside, it appears that the occupant of the Chinese room is fluent in Chinese. Not so. He has no understanding of the Chinese language. He is simply following the algorithm of pattern matching. Computers may look like they understand, but under the hood they are simply executing algorithms. They have no understanding.
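In programming terms, the Chinese room is nothing but table lookup. Here is a toy sketch in Python; the two question-and-answer pairs are illustrative placeholders for Searle's billions of file-cabinet entries:

```python
# The "file cabinets": stored questions matched to stored answers.
file_cabinets = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "今天星期几？": "今天星期三。",  # "What day is it?" -> "Wednesday."
}

def chinese_room(question: str) -> str:
    # Pure pattern matching: the code manipulates symbols it does not
    # understand, just as Searle does in the room.
    return file_cabinets.get(question, "")  # blank slip if no match

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```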
In 2011, IBM’s Watson computer famously beat champion contestants on the quiz show Jeopardy! By processing natural language, Watson answered queries faster than the human contestants could. But Watson was nothing more than the computer equivalent of Searle’s Chinese room, except that, instead of file cabinets, Watson had access to all of Wikipedia and then some. Like Searle in the Chinese room, Watson had no understanding of the meaning of the queries it fielded. It was merely executing an algorithm written by computer programmers.
Creativity
As mentioned, creativity is also not computable.
This limitation is central to debunking the claim that AI software will someday write better software, which will in turn write still better AI. If that were possible, AI could someday reach intelligence equivalent to that of humans. This is the point in time that Ray Kurzweil calls the singularity in his book The Singularity Is Near. According to Kurzweil, once the singularity occurs, we will have problems: AI will write better AI that writes better AI that soon exceeds human intelligence. Assuming this singularity thesis is true, Yuval Harari, author of the bestselling Homo Deus (The Human God), is typical of those forecasting a dystopian future: we will become the pets, or worse, the slaves, of super-intelligent AI.
The late brilliant physicist Stephen Hawking bought into this myth. He wrote: “The development of full artificial intelligence could spell the end of the human race. . . . It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
But wait a minute. This idea of AI writing better AI that writes even more powerful AI assumes that computers can be creative. But creativity, we claim, is noncomputable. How can this bold claim about noncomputable creativity be backed up? A creativity test is needed.
Some appeal to the Turing test as the test for computer creativity. In the Turing test, you take part in a two-way text chat with a partner who is either another human or a computer. If your partner turns out to be a computer, but you could not tell whether it was a computer or a human, the Turing test is passed. But like many vaguely stated criteria, the Turing test has been gamed by programmers. Rensselaer Polytechnic Institute’s Selmer Bringsjord notes: “Turing’s dream is . . . coming only on the strength of clever but shallow trickery.”
Bringsjord and his colleagues then proposed the Lovelace test of software creativity. The Lovelace test, named after the first computer programmer, Lady Ada Lovelace, claims that computer creativity will be demonstrated when a machine’s performance is beyond the explanation of its programmer or another expert programmer.
The Lovelace test does not preclude computers generating unexpected or surprising results. For example, the computer program AlphaGo beat world champion Lee Sedol at the difficult board game of Go. At one point, AlphaGo made a surprising move not understood by Go experts. The move turned out to be clever. Isn’t this creativity? No! AlphaGo had been trained to play Go and that’s what it did. Ask AlphaGo how to play Go or even what time it is. There will be no response unless the programmers of AlphaGo have written code to respond to these queries. AlphaGo plays Go for the same reason a Roomba sweeps your floor or a blender blends your smoothie. Each is a machine designed for a specific purpose.
AlphaGo remains a historical landmark in the development of AI, but it is not creative. AI cannot create more powerful AI, so superintelligence generated by AI is a myth. True AI is written in computer languages such as Python and C++. Programs about superintelligence or so-called artificial general intelligence are written on PowerPoint slides.
A Tool: Nothing More, Nothing Less
Here are the takeaways to remember about the limitations of AI. AI is simply a tool. It will be dangerous only if (1) there is an unintended contingency, such as the killing of a pedestrian by a self-driving car, or (2) the AI is controlled by an evil person, like the villain in the movie Angel Has Fallen who unleashes a drone swarm attack on the President.
As with all new technologies, like electricity and thermonuclear energy, care must be taken in the development of AI to ensure its safe and proper use. Frayed electrical wires still burn down houses, and downed electric lines still electrocute people. But since the advantages of electricity far outweigh the dangers, we mitigate the risks through legislation, standards, and best practices. The hope is that AI dangers can likewise be contained.
Human Exceptionalism
These limitations of AI—sensory experience, love, understanding, and creativity—provide evidence of human exceptionalism. As David wrote in Psalm 139, we are “fearfully and wonderfully made.”
Robert J. Marks is a Distinguished Professor at Baylor University and is the Director of the Walter Bradley Center for Natural & Artificial Intelligence at Discovery Institute. His authored and coauthored books include Noncomputable You: What You Do That Artificial Intelligence Never Will, The Handbook of Fourier Analysis and Its Applications, Neural Smithing, Introduction to Shannon Sampling and Information Theory, and Introduction to Evolutionary Informatics. He is a Fellow of both the IEEE and the Optical Society of America.
This article originally appeared in Salvo, Issue #64, Spring 2023. Copyright © 2024 Salvo | www.salvomag.com | https://salvomag.com/article/salvo64/cannot-compute